00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 600 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3265 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.112 The recommended git tool is: git 00:00:00.112 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.142 Fetching changes from the remote Git repository 00:00:00.144 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.195 > git --version # 'git version 2.39.2' 00:00:00.195 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.221 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.221 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.388 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.398 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.408 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:06.408 > git config core.sparsecheckout # timeout=10 00:00:06.418 > git read-tree -mu HEAD # timeout=10 00:00:06.433 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:06.451 Commit message: "inventory: add WCP3 to free inventory" 00:00:06.451 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:06.534 [Pipeline] Start of Pipeline 00:00:06.549 [Pipeline] library 00:00:06.551 Loading library shm_lib@master 00:00:06.551 Library shm_lib@master is cached. Copying from home. 00:00:06.567 [Pipeline] node 00:00:06.576 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:06.577 [Pipeline] { 00:00:06.585 [Pipeline] catchError 00:00:06.586 [Pipeline] { 00:00:06.596 [Pipeline] wrap 00:00:06.603 [Pipeline] { 00:00:06.609 [Pipeline] stage 00:00:06.610 [Pipeline] { (Prologue) 00:00:06.626 [Pipeline] echo 00:00:06.627 Node: VM-host-SM4 00:00:06.631 [Pipeline] cleanWs 00:00:06.639 [WS-CLEANUP] Deleting project workspace... 00:00:06.639 [WS-CLEANUP] Deferred wipeout is used... 
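The prologue above pins the jbp repo by fetching a single ref at depth 1 and then checking out the recorded revision detached. A minimal sketch of the same pattern by hand, assuming anonymous read access (the job itself authenticates via GIT_ASKPASS and the SPDKCI HTTPS credentials):

  #!/bin/bash
  # Reproduce the pipeline's shallow, pinned checkout.
  repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  sha=9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d   # FETCH_HEAD recorded in the log

  git init jbp && cd jbp
  git remote add origin "$repo"
  # Depth-1 fetch of the branch head only; --tags/--force mirror the plugin's flags.
  git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
  # Detached checkout of the exact revision (here it equals the fetched tip).
  git checkout -f "$sha"
  git log --oneline -1   # expect: 9bf0dabe... "inventory: add WCP3 to free inventory"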
00:00:06.644 [WS-CLEANUP] done 00:00:06.792 [Pipeline] setCustomBuildProperty 00:00:06.848 [Pipeline] httpRequest 00:00:06.870 [Pipeline] echo 00:00:06.872 Sorcerer 10.211.164.101 is alive 00:00:06.879 [Pipeline] httpRequest 00:00:06.883 HttpMethod: GET 00:00:06.884 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.884 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.885 Response Code: HTTP/1.1 200 OK 00:00:06.885 Success: Status code 200 is in the accepted range: 200,404 00:00:06.886 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.014 [Pipeline] sh 00:00:08.300 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.319 [Pipeline] httpRequest 00:00:08.357 [Pipeline] echo 00:00:08.359 Sorcerer 10.211.164.101 is alive 00:00:08.369 [Pipeline] httpRequest 00:00:08.373 HttpMethod: GET 00:00:08.374 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:08.375 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:08.386 Response Code: HTTP/1.1 200 OK 00:00:08.387 Success: Status code 200 is in the accepted range: 200,404 00:00:08.387 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:23.573 [Pipeline] sh 00:01:23.853 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:26.393 [Pipeline] sh 00:01:26.670 + git -C spdk log --oneline -n5 00:01:26.670 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:26.670 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:26.670 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:26.670 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:26.670 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:26.690 [Pipeline] withCredentials 00:01:26.701 > git --version # timeout=10 00:01:26.713 > git --version # 'git version 2.39.2' 00:01:26.729 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:26.731 [Pipeline] { 00:01:26.738 [Pipeline] retry 00:01:26.740 [Pipeline] { 00:01:26.755 [Pipeline] sh 00:01:27.035 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:27.048 [Pipeline] } 00:01:27.073 [Pipeline] // retry 00:01:27.079 [Pipeline] } 00:01:27.101 [Pipeline] // withCredentials 00:01:27.113 [Pipeline] httpRequest 00:01:27.133 [Pipeline] echo 00:01:27.134 Sorcerer 10.211.164.101 is alive 00:01:27.145 [Pipeline] httpRequest 00:01:27.150 HttpMethod: GET 00:01:27.151 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:27.151 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:27.161 Response Code: HTTP/1.1 200 OK 00:01:27.161 Success: Status code 200 is in the accepted range: 200,404 00:01:27.162 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:35.708 [Pipeline] sh 00:01:36.008 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:37.397 [Pipeline] sh 00:01:37.681 + git -C dpdk log --oneline -n5 00:01:37.681 caf0f5d395 version: 22.11.4 00:01:37.681 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:37.681 dc9c799c7d vhost: fix missing spinlock unlock 
00:01:37.681 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:37.681 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:37.699 [Pipeline] writeFile 00:01:37.715 [Pipeline] sh 00:01:37.995 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:38.006 [Pipeline] sh 00:01:38.284 + cat autorun-spdk.conf 00:01:38.284 SPDK_TEST_UNITTEST=1 00:01:38.284 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.284 SPDK_TEST_NVME=1 00:01:38.284 SPDK_TEST_BLOCKDEV=1 00:01:38.284 SPDK_RUN_ASAN=1 00:01:38.284 SPDK_RUN_UBSAN=1 00:01:38.284 SPDK_TEST_RAID5=1 00:01:38.284 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:38.284 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:38.284 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.290 RUN_NIGHTLY=1 00:01:38.292 [Pipeline] } 00:01:38.311 [Pipeline] // stage 00:01:38.328 [Pipeline] stage 00:01:38.330 [Pipeline] { (Run VM) 00:01:38.344 [Pipeline] sh 00:01:38.621 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:38.621 + echo 'Start stage prepare_nvme.sh' 00:01:38.621 Start stage prepare_nvme.sh 00:01:38.621 + [[ -n 5 ]] 00:01:38.621 + disk_prefix=ex5 00:01:38.621 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:01:38.621 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:01:38.621 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:01:38.621 ++ SPDK_TEST_UNITTEST=1 00:01:38.621 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.621 ++ SPDK_TEST_NVME=1 00:01:38.621 ++ SPDK_TEST_BLOCKDEV=1 00:01:38.621 ++ SPDK_RUN_ASAN=1 00:01:38.621 ++ SPDK_RUN_UBSAN=1 00:01:38.621 ++ SPDK_TEST_RAID5=1 00:01:38.621 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:38.621 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:38.621 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.621 ++ RUN_NIGHTLY=1 00:01:38.621 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:38.621 + nvme_files=() 00:01:38.621 + declare -A nvme_files 00:01:38.621 + backend_dir=/var/lib/libvirt/images/backends 00:01:38.621 + nvme_files['nvme.img']=5G 00:01:38.621 + nvme_files['nvme-cmb.img']=5G 00:01:38.621 + nvme_files['nvme-multi0.img']=4G 00:01:38.621 + nvme_files['nvme-multi1.img']=4G 00:01:38.621 + nvme_files['nvme-multi2.img']=4G 00:01:38.621 + nvme_files['nvme-openstack.img']=8G 00:01:38.621 + nvme_files['nvme-zns.img']=5G 00:01:38.621 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:38.621 + (( SPDK_TEST_FTL == 1 )) 00:01:38.621 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:38.621 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:38.621 + for nvme in "${!nvme_files[@]}" 00:01:38.621 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:38.621 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:38.621 + for nvme in "${!nvme_files[@]}" 00:01:38.621 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:38.621 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:38.621 + for nvme in "${!nvme_files[@]}" 00:01:38.621 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:38.621 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:38.621 + for nvme in "${!nvme_files[@]}" 00:01:38.621 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:38.621 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:38.621 + for nvme in "${!nvme_files[@]}" 00:01:38.621 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:38.621 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:38.621 + for nvme in "${!nvme_files[@]}" 00:01:38.621 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:38.888 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:38.888 + for nvme in "${!nvme_files[@]}" 00:01:38.888 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:38.888 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:38.888 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:38.888 + echo 'End stage prepare_nvme.sh' 00:01:38.888 End stage prepare_nvme.sh 00:01:38.903 [Pipeline] sh 00:01:39.188 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:39.188 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -H -a -v -f ubuntu2204 00:01:39.188 00:01:39.188 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:01:39.188 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:01:39.188 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:01:39.188 HELP=0 00:01:39.188 DRY_RUN=0 00:01:39.188 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img, 00:01:39.188 NVME_DISKS_TYPE=nvme, 00:01:39.188 NVME_AUTO_CREATE=0 00:01:39.188 NVME_DISKS_NAMESPACES=, 00:01:39.188 NVME_CMB=, 00:01:39.188 NVME_PMR=, 00:01:39.188 NVME_ZNS=, 00:01:39.188 NVME_MS=, 00:01:39.188 NVME_FDP=, 00:01:39.188 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:39.188 SPDK_VAGRANT_VMCPU=10 00:01:39.188 SPDK_VAGRANT_VMRAM=12288 00:01:39.188 SPDK_VAGRANT_PROVIDER=libvirt 00:01:39.188 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:39.188 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:39.188 SPDK_OPENSTACK_NETWORK=0 
00:01:39.188 VAGRANT_PACKAGE_BOX=0 00:01:39.188 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:39.188 FORCE_DISTRO=true 00:01:39.188 VAGRANT_BOX_VERSION= 00:01:39.188 EXTRA_VAGRANTFILES= 00:01:39.188 NIC_MODEL=e1000 00:01:39.188 00:01:39.188 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:01:39.188 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:41.715 Bringing machine 'default' up with 'libvirt' provider... 00:01:42.281 ==> default: Creating image (snapshot of base box volume). 00:01:42.281 ==> default: Creating domain with the following settings... 00:01:42.281 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1720887553_b5efe84b26323d641638 00:01:42.281 ==> default: -- Domain type: kvm 00:01:42.281 ==> default: -- Cpus: 10 00:01:42.281 ==> default: -- Feature: acpi 00:01:42.281 ==> default: -- Feature: apic 00:01:42.281 ==> default: -- Feature: pae 00:01:42.281 ==> default: -- Memory: 12288M 00:01:42.281 ==> default: -- Memory Backing: hugepages: 00:01:42.281 ==> default: -- Management MAC: 00:01:42.281 ==> default: -- Loader: 00:01:42.281 ==> default: -- Nvram: 00:01:42.281 ==> default: -- Base box: spdk/ubuntu2204 00:01:42.281 ==> default: -- Storage pool: default 00:01:42.281 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1720887553_b5efe84b26323d641638.img (20G) 00:01:42.281 ==> default: -- Volume Cache: default 00:01:42.281 ==> default: -- Kernel: 00:01:42.281 ==> default: -- Initrd: 00:01:42.281 ==> default: -- Graphics Type: vnc 00:01:42.281 ==> default: -- Graphics Port: -1 00:01:42.281 ==> default: -- Graphics IP: 127.0.0.1 00:01:42.281 ==> default: -- Graphics Password: Not defined 00:01:42.281 ==> default: -- Video Type: cirrus 00:01:42.281 ==> default: -- Video VRAM: 9216 00:01:42.281 ==> default: -- Sound Type: 00:01:42.281 ==> default: -- Keymap: en-us 00:01:42.281 ==> default: -- TPM Path: 00:01:42.281 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:42.281 ==> default: -- Command line args: 00:01:42.281 ==> default: -> value=-device, 00:01:42.281 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:42.281 ==> default: -> value=-drive, 00:01:42.281 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:42.281 ==> default: -> value=-device, 00:01:42.281 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.539 ==> default: Creating shared folders metadata... 00:01:42.539 ==> default: Starting domain. 00:01:44.466 ==> default: Waiting for domain to get an IP address... 00:01:54.438 ==> default: Waiting for SSH to become available... 00:01:56.998 ==> default: Configuring and enabling network interfaces... 00:02:02.263 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:06.451 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:10.639 ==> default: Mounting SSHFS shared folder... 00:02:12.012 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:02:12.012 ==> default: Checking Mount.. 00:02:12.579 ==> default: Folder Successfully Mounted! 
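The "Command line args" block in the domain settings shows how the raw backing file becomes an emulated NVMe controller plus namespace. A standalone sketch of the same wiring, assuming the vanilla QEMU v8.0.0 binary the job configures; the device/drive stanza is copied from the log, while the machine/cpu/memory flags are illustrative stand-ins for what vagrant-libvirt generates:

  #!/bin/bash
  # Hand-rolled equivalent of the NVMe wiring libvirt builds above.
  QEMU=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
  IMG=/var/lib/libvirt/images/backends/ex5-nvme.img

  "$QEMU" -machine q35,accel=kvm -smp 10 -m 12288 \
    -device nvme,id=nvme-0,serial=12340 \
    -drive format=raw,file="$IMG",if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096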
00:02:12.579 ==> default: Running provisioner: file... 00:02:13.147 default: ~/.gitconfig => .gitconfig 00:02:13.407 00:02:13.407 SUCCESS! 00:02:13.407 00:02:13.407 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:02:13.407 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:13.407 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:02:13.407 00:02:13.416 [Pipeline] } 00:02:13.437 [Pipeline] // stage 00:02:13.448 [Pipeline] dir 00:02:13.448 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:02:13.450 [Pipeline] { 00:02:13.466 [Pipeline] catchError 00:02:13.469 [Pipeline] { 00:02:13.484 [Pipeline] sh 00:02:13.792 + vagrant ssh-config --host vagrant 00:02:13.792 + sed -ne /^Host/,$p+ 00:02:13.792 tee ssh_conf 00:02:17.074 Host vagrant 00:02:17.074 HostName 192.168.121.59 00:02:17.074 User vagrant 00:02:17.074 Port 22 00:02:17.074 UserKnownHostsFile /dev/null 00:02:17.074 StrictHostKeyChecking no 00:02:17.074 PasswordAuthentication no 00:02:17.074 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:02:17.074 IdentitiesOnly yes 00:02:17.074 LogLevel FATAL 00:02:17.074 ForwardAgent yes 00:02:17.074 ForwardX11 yes 00:02:17.074 00:02:17.087 [Pipeline] withEnv 00:02:17.088 [Pipeline] { 00:02:17.103 [Pipeline] sh 00:02:17.376 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:17.376 source /etc/os-release 00:02:17.376 [[ -e /image.version ]] && img=$(< /image.version) 00:02:17.376 # Minimal, systemd-like check. 00:02:17.376 if [[ -e /.dockerenv ]]; then 00:02:17.376 # Clear garbage from the node's name: 00:02:17.376 # agt-er_autotest_547-896 -> autotest_547-896 00:02:17.376 # $HOSTNAME is the actual container id 00:02:17.376 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:17.376 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:17.376 # We can assume this is a mount from a host where container is running, 00:02:17.376 # so fetch its hostname to easily identify the target swarm worker. 
00:02:17.376 container="$(< /etc/hostname) ($agent)" 00:02:17.376 else 00:02:17.376 # Fallback 00:02:17.376 container=$agent 00:02:17.376 fi 00:02:17.376 fi 00:02:17.376 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:17.376 00:02:17.645 [Pipeline] } 00:02:17.671 [Pipeline] // withEnv 00:02:17.680 [Pipeline] setCustomBuildProperty 00:02:17.706 [Pipeline] stage 00:02:17.709 [Pipeline] { (Tests) 00:02:17.730 [Pipeline] sh 00:02:18.009 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:18.283 [Pipeline] sh 00:02:18.562 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:18.831 [Pipeline] timeout 00:02:18.831 Timeout set to expire in 1 hr 30 min 00:02:18.833 [Pipeline] { 00:02:18.845 [Pipeline] sh 00:02:19.119 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:19.685 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:02:19.698 [Pipeline] sh 00:02:19.978 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:20.251 [Pipeline] sh 00:02:20.532 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:20.809 [Pipeline] sh 00:02:21.087 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:02:21.347 ++ readlink -f spdk_repo 00:02:21.347 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:21.347 + [[ -n /home/vagrant/spdk_repo ]] 00:02:21.347 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:21.347 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:21.347 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:21.347 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:21.347 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:21.347 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:02:21.347 + cd /home/vagrant/spdk_repo 00:02:21.347 + source /etc/os-release 00:02:21.347 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:02:21.347 ++ NAME=Ubuntu 00:02:21.347 ++ VERSION_ID=22.04 00:02:21.347 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:02:21.347 ++ VERSION_CODENAME=jammy 00:02:21.347 ++ ID=ubuntu 00:02:21.347 ++ ID_LIKE=debian 00:02:21.347 ++ HOME_URL=https://www.ubuntu.com/ 00:02:21.347 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:02:21.347 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:02:21.347 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:02:21.347 ++ UBUNTU_CODENAME=jammy 00:02:21.347 + uname -a 00:02:21.347 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:21.347 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:21.347 Hugepages 00:02:21.347 node hugesize free / total 00:02:21.347 node0 1048576kB 0 / 0 00:02:21.347 node0 2048kB 0 / 0 00:02:21.347 00:02:21.347 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:21.607 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:21.607 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:21.607 + rm -f /tmp/spdk-ld-path 00:02:21.607 + source autorun-spdk.conf 00:02:21.607 ++ SPDK_TEST_UNITTEST=1 00:02:21.607 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.607 ++ SPDK_TEST_NVME=1 00:02:21.607 ++ SPDK_TEST_BLOCKDEV=1 00:02:21.607 ++ SPDK_RUN_ASAN=1 00:02:21.607 ++ SPDK_RUN_UBSAN=1 00:02:21.607 ++ SPDK_TEST_RAID5=1 00:02:21.607 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:21.607 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:21.607 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.607 ++ RUN_NIGHTLY=1 00:02:21.607 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:21.607 + [[ -n '' ]] 00:02:21.607 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:21.607 + for M in /var/spdk/build-*-manifest.txt 00:02:21.607 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:21.607 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.607 + for M in /var/spdk/build-*-manifest.txt 00:02:21.607 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:21.607 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.607 ++ uname 00:02:21.607 + [[ Linux == \L\i\n\u\x ]] 00:02:21.607 + sudo dmesg -T 00:02:21.607 + sudo dmesg --clear 00:02:21.607 + dmesg_pid=2269 00:02:21.607 + sudo dmesg -Tw 00:02:21.607 + [[ Ubuntu == FreeBSD ]] 00:02:21.607 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.607 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.607 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:21.607 + [[ -x /usr/src/fio-static/fio ]] 00:02:21.607 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:21.607 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:21.607 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:21.607 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:21.607 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:21.607 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:21.607 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:21.607 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:21.607 Test configuration: 00:02:21.607 SPDK_TEST_UNITTEST=1 00:02:21.607 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.607 SPDK_TEST_NVME=1 00:02:21.607 SPDK_TEST_BLOCKDEV=1 00:02:21.607 SPDK_RUN_ASAN=1 00:02:21.607 SPDK_RUN_UBSAN=1 00:02:21.607 SPDK_TEST_RAID5=1 00:02:21.607 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:21.607 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:21.607 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.607 RUN_NIGHTLY=1 16:19:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:21.607 16:19:52 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:21.607 16:19:52 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:21.607 16:19:52 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:21.607 16:19:52 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:21.607 16:19:52 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:21.607 16:19:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:21.607 16:19:52 -- paths/export.sh@5 -- $ export PATH 00:02:21.607 16:19:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:21.607 16:19:52 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:21.607 16:19:52 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:21.867 16:19:52 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720887592.XXXXXX 00:02:21.867 16:19:52 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720887592.tzWmNQ 00:02:21.867 16:19:52 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:02:21.867 16:19:52 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.867 16:19:52 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:21.867 
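The autobuild prologue above stamps a per-run scratch directory with the current epoch (date +%s feeding mktemp). A minimal sketch of that pattern, assuming only GNU coreutils:

  #!/bin/bash
  # Per-run scratch dir named after the current epoch, as autobuild_common.sh does.
  ts=$(date +%s)
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")   # e.g. /tmp/spdk_1720887592.tzWmNQ
  export SPDK_WORKSPACE
  echo "scratch: $SPDK_WORKSPACE"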
16:19:52 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:21.867 16:19:52 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:21.867 16:19:52 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:21.867 16:19:52 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:21.867 16:19:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.867 16:19:52 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:21.867 16:19:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:21.867 16:19:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:21.867 16:19:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:21.867 16:19:52 -- spdk/autobuild.sh@16 -- $ date -u 00:02:21.867 Sat Jul 13 16:19:52 UTC 2024 00:02:21.867 16:19:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:21.867 LTS-59-g4b94202c6 00:02:21.867 16:19:52 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:21.867 16:19:52 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:21.867 16:19:52 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:21.867 16:19:52 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:21.867 16:19:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.867 ************************************ 00:02:21.867 START TEST asan 00:02:21.867 ************************************ 00:02:21.867 using asan 00:02:21.867 16:19:52 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:02:21.867 00:02:21.867 real 0m0.001s 00:02:21.867 user 0m0.000s 00:02:21.867 sys 0m0.000s 00:02:21.867 16:19:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:21.867 16:19:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.867 ************************************ 00:02:21.867 END TEST asan 00:02:21.867 ************************************ 00:02:21.867 16:19:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.867 16:19:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.867 16:19:52 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:21.867 16:19:52 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:21.867 16:19:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.867 ************************************ 00:02:21.867 START TEST ubsan 00:02:21.867 ************************************ 00:02:21.867 using ubsan 00:02:21.867 16:19:52 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:21.867 00:02:21.867 real 0m0.000s 00:02:21.867 user 0m0.000s 00:02:21.867 sys 0m0.000s 00:02:21.867 16:19:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:21.867 16:19:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.867 ************************************ 00:02:21.867 END TEST ubsan 00:02:21.867 ************************************ 00:02:21.867 16:19:52 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:21.867 16:19:52 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:21.867 16:19:52 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:21.867 16:19:52 -- common/autotest_common.sh@1077 -- $ '[' 
2 -le 1 ']' 00:02:21.867 16:19:52 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:21.867 16:19:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.867 ************************************ 00:02:21.867 START TEST build_native_dpdk 00:02:21.867 ************************************ 00:02:21.867 16:19:52 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:21.867 16:19:52 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:21.867 16:19:52 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:21.867 16:19:52 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:21.867 16:19:52 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:21.867 16:19:52 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:21.867 16:19:52 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:21.867 16:19:52 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:21.867 16:19:52 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:21.867 16:19:52 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:21.867 16:19:52 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:21.867 16:19:52 -- common/autobuild_common.sh@68 -- $ compiler_version=11 00:02:21.867 16:19:52 -- common/autobuild_common.sh@69 -- $ compiler_version=11 00:02:21.867 16:19:52 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:21.867 16:19:52 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.867 16:19:52 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:21.867 16:19:52 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:21.867 16:19:52 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:21.867 caf0f5d395 version: 22.11.4 00:02:21.867 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:21.867 dc9c799c7d vhost: fix missing spinlock unlock 00:02:21.867 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:21.867 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:21.867 16:19:52 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:21.867 16:19:52 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:21.867 16:19:52 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:21.867 16:19:52 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:21.867 16:19:52 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:21.867 16:19:52 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:21.867 16:19:52 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:21.867 16:19:52 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:21.867 16:19:52 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:21.867 16:19:52 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:21.867 16:19:52 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:21.867 16:19:52 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:21.867 16:19:52 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:21.867 16:19:52 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:21.867 16:19:52 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:21.867 16:19:52 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:21.867 16:19:52 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:21.867 16:19:52 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:21.867 16:19:52 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:21.867 16:19:52 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:21.867 16:19:52 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:21.867 16:19:52 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:21.867 16:19:52 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:21.867 16:19:52 -- scripts/common.sh@343 -- $ case "$op" in 00:02:21.868 16:19:52 -- scripts/common.sh@344 -- $ : 1 00:02:21.868 16:19:52 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:21.868 16:19:52 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.868 16:19:52 -- scripts/common.sh@364 -- $ decimal 22 00:02:21.868 16:19:52 -- scripts/common.sh@352 -- $ local d=22 00:02:21.868 16:19:52 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:21.868 16:19:52 -- scripts/common.sh@354 -- $ echo 22 00:02:21.868 16:19:52 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:21.868 16:19:52 -- scripts/common.sh@365 -- $ decimal 21 00:02:21.868 16:19:52 -- scripts/common.sh@352 -- $ local d=21 00:02:21.868 16:19:52 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:21.868 16:19:52 -- scripts/common.sh@354 -- $ echo 21 00:02:21.868 16:19:52 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:21.868 16:19:52 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:21.868 16:19:52 -- scripts/common.sh@366 -- $ return 1 00:02:21.868 16:19:52 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:21.868 patching file config/rte_config.h 00:02:21.868 Hunk #1 succeeded at 60 (offset 1 line). 00:02:21.868 16:19:52 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:21.868 16:19:52 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:21.868 16:19:52 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:21.868 16:19:52 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:21.868 16:19:52 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:27.141 The Meson build system 00:02:27.141 Version: 1.4.0 00:02:27.141 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:27.141 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:27.141 Build type: native build 00:02:27.141 Program cat found: YES (/usr/bin/cat) 00:02:27.141 Project name: DPDK 00:02:27.141 Project version: 22.11.4 00:02:27.141 C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:27.141 C linker for the host machine: gcc ld.bfd 2.38 00:02:27.141 Host machine cpu family: x86_64 00:02:27.141 Host machine cpu: x86_64 00:02:27.141 Message: ## Building in Developer Mode ## 00:02:27.141 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:27.141 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:27.141 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:27.141 Program objdump found: YES (/usr/bin/objdump) 00:02:27.141 Program python3 found: YES (/usr/bin/python3) 00:02:27.141 Program cat found: YES (/usr/bin/cat) 00:02:27.141 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
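The lt 22.11.4 21.11.0 trace above is a field-wise dotted-version comparison: both strings are split on ".-:" into arrays, then compared numerically column by column. A compact re-derivation of that logic as one function (the function name is illustrative; the splitting and comparison steps are the ones the trace shows):

  #!/bin/bash
  # "Less than" for dotted versions, as the cmp_versions trace does.
  version_lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    # Walk the longer of the two arrays, treating missing fields as 0.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal: not less-than
  }

  version_lt 22.11.4 21.11.0 || echo "not older"   # matches the trace's 'return 1'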
00:02:27.141 Checking for size of "void *" : 8 00:02:27.141 Checking for size of "void *" : 8 (cached) 00:02:27.141 Library m found: YES 00:02:27.141 Library numa found: YES 00:02:27.141 Has header "numaif.h" : YES 00:02:27.141 Library fdt found: NO 00:02:27.141 Library execinfo found: NO 00:02:27.141 Has header "execinfo.h" : YES 00:02:27.141 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:27.141 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:27.141 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:27.141 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:27.141 Run-time dependency openssl found: YES 3.0.2 00:02:27.141 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:27.141 Library pcap found: NO 00:02:27.141 Compiler for C supports arguments -Wcast-qual: YES 00:02:27.141 Compiler for C supports arguments -Wdeprecated: YES 00:02:27.141 Compiler for C supports arguments -Wformat: YES 00:02:27.141 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:27.141 Compiler for C supports arguments -Wformat-security: YES 00:02:27.141 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:27.141 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:27.141 Compiler for C supports arguments -Wnested-externs: YES 00:02:27.141 Compiler for C supports arguments -Wold-style-definition: YES 00:02:27.141 Compiler for C supports arguments -Wpointer-arith: YES 00:02:27.141 Compiler for C supports arguments -Wsign-compare: YES 00:02:27.141 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:27.141 Compiler for C supports arguments -Wundef: YES 00:02:27.141 Compiler for C supports arguments -Wwrite-strings: YES 00:02:27.141 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:27.141 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:27.141 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:27.141 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:27.141 Compiler for C supports arguments -mavx512f: YES 00:02:27.141 Checking if "AVX512 checking" compiles: YES 00:02:27.141 Fetching value of define "__SSE4_2__" : 1 00:02:27.141 Fetching value of define "__AES__" : 1 00:02:27.141 Fetching value of define "__AVX__" : 1 00:02:27.141 Fetching value of define "__AVX2__" : 1 00:02:27.141 Fetching value of define "__AVX512BW__" : 1 00:02:27.141 Fetching value of define "__AVX512CD__" : 1 00:02:27.141 Fetching value of define "__AVX512DQ__" : 1 00:02:27.141 Fetching value of define "__AVX512F__" : 1 00:02:27.141 Fetching value of define "__AVX512VL__" : 1 00:02:27.141 Fetching value of define "__PCLMUL__" : 1 00:02:27.141 Fetching value of define "__RDRND__" : 1 00:02:27.141 Fetching value of define "__RDSEED__" : 1 00:02:27.141 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:27.141 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:27.141 Message: lib/kvargs: Defining dependency "kvargs" 00:02:27.141 Message: lib/telemetry: Defining dependency "telemetry" 00:02:27.141 Checking for function "getentropy" : YES 00:02:27.142 Message: lib/eal: Defining dependency "eal" 00:02:27.142 Message: lib/ring: Defining dependency "ring" 00:02:27.142 Message: lib/rcu: Defining dependency "rcu" 00:02:27.142 Message: lib/mempool: Defining dependency "mempool" 00:02:27.142 Message: lib/mbuf: Defining dependency "mbuf" 00:02:27.142 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:27.142 Fetching value of define 
"__AVX512F__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:27.142 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:27.142 Compiler for C supports arguments -mpclmul: YES 00:02:27.142 Compiler for C supports arguments -maes: YES 00:02:27.142 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.142 Compiler for C supports arguments -mavx512bw: YES 00:02:27.142 Compiler for C supports arguments -mavx512dq: YES 00:02:27.142 Compiler for C supports arguments -mavx512vl: YES 00:02:27.142 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:27.142 Compiler for C supports arguments -mavx2: YES 00:02:27.142 Compiler for C supports arguments -mavx: YES 00:02:27.142 Message: lib/net: Defining dependency "net" 00:02:27.142 Message: lib/meter: Defining dependency "meter" 00:02:27.142 Message: lib/ethdev: Defining dependency "ethdev" 00:02:27.142 Message: lib/pci: Defining dependency "pci" 00:02:27.142 Message: lib/cmdline: Defining dependency "cmdline" 00:02:27.142 Message: lib/metrics: Defining dependency "metrics" 00:02:27.142 Message: lib/hash: Defining dependency "hash" 00:02:27.142 Message: lib/timer: Defining dependency "timer" 00:02:27.142 Fetching value of define "__AVX2__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:27.142 Message: lib/acl: Defining dependency "acl" 00:02:27.142 Message: lib/bbdev: Defining dependency "bbdev" 00:02:27.142 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:27.142 Run-time dependency libelf found: YES 0.186 00:02:27.142 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:02:27.142 Message: lib/bpf: Defining dependency "bpf" 00:02:27.142 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:27.142 Message: lib/compressdev: Defining dependency "compressdev" 00:02:27.142 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:27.142 Message: lib/distributor: Defining dependency "distributor" 00:02:27.142 Message: lib/efd: Defining dependency "efd" 00:02:27.142 Message: lib/eventdev: Defining dependency "eventdev" 00:02:27.142 Message: lib/gpudev: Defining dependency "gpudev" 00:02:27.142 Message: lib/gro: Defining dependency "gro" 00:02:27.142 Message: lib/gso: Defining dependency "gso" 00:02:27.142 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:27.142 Message: lib/jobstats: Defining dependency "jobstats" 00:02:27.142 Message: lib/latencystats: Defining dependency "latencystats" 00:02:27.142 Message: lib/lpm: Defining dependency "lpm" 00:02:27.142 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:27.142 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:27.142 Message: lib/member: Defining dependency "member" 00:02:27.142 Message: lib/pcapng: Defining dependency "pcapng" 00:02:27.142 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:27.142 Message: lib/power: Defining dependency "power" 00:02:27.142 Message: lib/rawdev: Defining dependency "rawdev" 00:02:27.142 
Message: lib/regexdev: Defining dependency "regexdev" 00:02:27.142 Message: lib/dmadev: Defining dependency "dmadev" 00:02:27.142 Message: lib/rib: Defining dependency "rib" 00:02:27.142 Message: lib/reorder: Defining dependency "reorder" 00:02:27.142 Message: lib/sched: Defining dependency "sched" 00:02:27.142 Message: lib/security: Defining dependency "security" 00:02:27.142 Message: lib/stack: Defining dependency "stack" 00:02:27.142 Has header "linux/userfaultfd.h" : YES 00:02:27.142 Message: lib/vhost: Defining dependency "vhost" 00:02:27.142 Message: lib/ipsec: Defining dependency "ipsec" 00:02:27.142 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:27.142 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:27.142 Message: lib/fib: Defining dependency "fib" 00:02:27.142 Message: lib/port: Defining dependency "port" 00:02:27.142 Message: lib/pdump: Defining dependency "pdump" 00:02:27.142 Message: lib/table: Defining dependency "table" 00:02:27.142 Message: lib/pipeline: Defining dependency "pipeline" 00:02:27.142 Message: lib/graph: Defining dependency "graph" 00:02:27.142 Message: lib/node: Defining dependency "node" 00:02:27.142 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:27.142 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:27.142 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:27.142 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:27.142 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:27.142 Compiler for C supports arguments -Wno-unused-value: YES 00:02:27.142 Compiler for C supports arguments -Wno-format: YES 00:02:27.142 Compiler for C supports arguments -Wno-format-security: YES 00:02:27.142 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:28.521 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:28.521 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:28.521 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:28.521 Fetching value of define "__AVX2__" : 1 (cached) 00:02:28.521 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:28.521 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:28.521 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:28.521 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:28.521 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:28.521 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:28.521 Program doxygen found: YES (/usr/bin/doxygen) 00:02:28.521 Configuring doxy-api.conf using configuration 00:02:28.521 Program sphinx-build found: NO 00:02:28.521 Configuring rte_build_config.h using configuration 00:02:28.521 Message: 00:02:28.521 ================= 00:02:28.521 Applications Enabled 00:02:28.521 ================= 00:02:28.521 00:02:28.521 apps: 00:02:28.521 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 00:02:28.521 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 00:02:28.521 00:02:28.521 00:02:28.521 Message: 00:02:28.521 ================= 00:02:28.521 Libraries Enabled 00:02:28.521 ================= 00:02:28.521 00:02:28.521 libs: 00:02:28.521 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:28.521 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:28.521 bbdev, bitratestats, bpf, 
cfgfile, compressdev, cryptodev, distributor, efd, 00:02:28.521 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:28.521 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:28.521 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:28.521 table, pipeline, graph, node, 00:02:28.521 00:02:28.521 Message: 00:02:28.521 =============== 00:02:28.521 Drivers Enabled 00:02:28.521 =============== 00:02:28.521 00:02:28.521 common: 00:02:28.521 00:02:28.521 bus: 00:02:28.521 pci, vdev, 00:02:28.521 mempool: 00:02:28.521 ring, 00:02:28.521 dma: 00:02:28.521 00:02:28.521 net: 00:02:28.521 i40e, 00:02:28.521 raw: 00:02:28.521 00:02:28.521 crypto: 00:02:28.521 00:02:28.521 compress: 00:02:28.521 00:02:28.521 regex: 00:02:28.521 00:02:28.521 vdpa: 00:02:28.521 00:02:28.521 event: 00:02:28.521 00:02:28.521 baseband: 00:02:28.521 00:02:28.521 gpu: 00:02:28.521 00:02:28.521 00:02:28.521 Message: 00:02:28.521 ================= 00:02:28.521 Content Skipped 00:02:28.521 ================= 00:02:28.521 00:02:28.521 apps: 00:02:28.521 dumpcap: missing dependency, "libpcap" 00:02:28.521 00:02:28.521 libs: 00:02:28.521 kni: explicitly disabled via build config (deprecated lib) 00:02:28.521 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:28.521 00:02:28.521 drivers: 00:02:28.521 common/cpt: not in enabled drivers build config 00:02:28.521 common/dpaax: not in enabled drivers build config 00:02:28.521 common/iavf: not in enabled drivers build config 00:02:28.521 common/idpf: not in enabled drivers build config 00:02:28.521 common/mvep: not in enabled drivers build config 00:02:28.521 common/octeontx: not in enabled drivers build config 00:02:28.521 bus/auxiliary: not in enabled drivers build config 00:02:28.521 bus/dpaa: not in enabled drivers build config 00:02:28.521 bus/fslmc: not in enabled drivers build config 00:02:28.521 bus/ifpga: not in enabled drivers build config 00:02:28.521 bus/vmbus: not in enabled drivers build config 00:02:28.521 common/cnxk: not in enabled drivers build config 00:02:28.521 common/mlx5: not in enabled drivers build config 00:02:28.521 common/qat: not in enabled drivers build config 00:02:28.521 common/sfc_efx: not in enabled drivers build config 00:02:28.521 mempool/bucket: not in enabled drivers build config 00:02:28.521 mempool/cnxk: not in enabled drivers build config 00:02:28.521 mempool/dpaa: not in enabled drivers build config 00:02:28.521 mempool/dpaa2: not in enabled drivers build config 00:02:28.521 mempool/octeontx: not in enabled drivers build config 00:02:28.521 mempool/stack: not in enabled drivers build config 00:02:28.521 dma/cnxk: not in enabled drivers build config 00:02:28.521 dma/dpaa: not in enabled drivers build config 00:02:28.521 dma/dpaa2: not in enabled drivers build config 00:02:28.521 dma/hisilicon: not in enabled drivers build config 00:02:28.521 dma/idxd: not in enabled drivers build config 00:02:28.521 dma/ioat: not in enabled drivers build config 00:02:28.521 dma/skeleton: not in enabled drivers build config 00:02:28.521 net/af_packet: not in enabled drivers build config 00:02:28.521 net/af_xdp: not in enabled drivers build config 00:02:28.521 net/ark: not in enabled drivers build config 00:02:28.521 net/atlantic: not in enabled drivers build config 00:02:28.521 net/avp: not in enabled drivers build config 00:02:28.521 net/axgbe: not in enabled drivers build config 00:02:28.521 net/bnx2x: not in enabled drivers build config 00:02:28.521 net/bnxt: not in enabled drivers build 
config 00:02:28.521 net/bonding: not in enabled drivers build config 00:02:28.521 net/cnxk: not in enabled drivers build config 00:02:28.521 net/cxgbe: not in enabled drivers build config 00:02:28.521 net/dpaa: not in enabled drivers build config 00:02:28.521 net/dpaa2: not in enabled drivers build config 00:02:28.521 net/e1000: not in enabled drivers build config 00:02:28.521 net/ena: not in enabled drivers build config 00:02:28.521 net/enetc: not in enabled drivers build config 00:02:28.521 net/enetfec: not in enabled drivers build config 00:02:28.521 net/enic: not in enabled drivers build config 00:02:28.521 net/failsafe: not in enabled drivers build config 00:02:28.522 net/fm10k: not in enabled drivers build config 00:02:28.522 net/gve: not in enabled drivers build config 00:02:28.522 net/hinic: not in enabled drivers build config 00:02:28.522 net/hns3: not in enabled drivers build config 00:02:28.522 net/iavf: not in enabled drivers build config 00:02:28.522 net/ice: not in enabled drivers build config 00:02:28.522 net/idpf: not in enabled drivers build config 00:02:28.522 net/igc: not in enabled drivers build config 00:02:28.522 net/ionic: not in enabled drivers build config 00:02:28.522 net/ipn3ke: not in enabled drivers build config 00:02:28.522 net/ixgbe: not in enabled drivers build config 00:02:28.522 net/kni: not in enabled drivers build config 00:02:28.522 net/liquidio: not in enabled drivers build config 00:02:28.522 net/mana: not in enabled drivers build config 00:02:28.522 net/memif: not in enabled drivers build config 00:02:28.522 net/mlx4: not in enabled drivers build config 00:02:28.522 net/mlx5: not in enabled drivers build config 00:02:28.522 net/mvneta: not in enabled drivers build config 00:02:28.522 net/mvpp2: not in enabled drivers build config 00:02:28.522 net/netvsc: not in enabled drivers build config 00:02:28.522 net/nfb: not in enabled drivers build config 00:02:28.522 net/nfp: not in enabled drivers build config 00:02:28.522 net/ngbe: not in enabled drivers build config 00:02:28.522 net/null: not in enabled drivers build config 00:02:28.522 net/octeontx: not in enabled drivers build config 00:02:28.522 net/octeon_ep: not in enabled drivers build config 00:02:28.522 net/pcap: not in enabled drivers build config 00:02:28.522 net/pfe: not in enabled drivers build config 00:02:28.522 net/qede: not in enabled drivers build config 00:02:28.522 net/ring: not in enabled drivers build config 00:02:28.522 net/sfc: not in enabled drivers build config 00:02:28.522 net/softnic: not in enabled drivers build config 00:02:28.522 net/tap: not in enabled drivers build config 00:02:28.522 net/thunderx: not in enabled drivers build config 00:02:28.522 net/txgbe: not in enabled drivers build config 00:02:28.522 net/vdev_netvsc: not in enabled drivers build config 00:02:28.522 net/vhost: not in enabled drivers build config 00:02:28.522 net/virtio: not in enabled drivers build config 00:02:28.522 net/vmxnet3: not in enabled drivers build config 00:02:28.522 raw/cnxk_bphy: not in enabled drivers build config 00:02:28.522 raw/cnxk_gpio: not in enabled drivers build config 00:02:28.522 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:28.522 raw/ifpga: not in enabled drivers build config 00:02:28.522 raw/ntb: not in enabled drivers build config 00:02:28.522 raw/skeleton: not in enabled drivers build config 00:02:28.522 crypto/armv8: not in enabled drivers build config 00:02:28.522 crypto/bcmfs: not in enabled drivers build config 00:02:28.522 crypto/caam_jr: not in enabled 
drivers build config 00:02:28.522 crypto/ccp: not in enabled drivers build config 00:02:28.522 crypto/cnxk: not in enabled drivers build config 00:02:28.522 crypto/dpaa_sec: not in enabled drivers build config 00:02:28.522 crypto/dpaa2_sec: not in enabled drivers build config 00:02:28.522 crypto/ipsec_mb: not in enabled drivers build config 00:02:28.522 crypto/mlx5: not in enabled drivers build config 00:02:28.522 crypto/mvsam: not in enabled drivers build config 00:02:28.522 crypto/nitrox: not in enabled drivers build config 00:02:28.522 crypto/null: not in enabled drivers build config 00:02:28.522 crypto/octeontx: not in enabled drivers build config 00:02:28.522 crypto/openssl: not in enabled drivers build config 00:02:28.522 crypto/scheduler: not in enabled drivers build config 00:02:28.522 crypto/uadk: not in enabled drivers build config 00:02:28.522 crypto/virtio: not in enabled drivers build config 00:02:28.522 compress/isal: not in enabled drivers build config 00:02:28.522 compress/mlx5: not in enabled drivers build config 00:02:28.522 compress/octeontx: not in enabled drivers build config 00:02:28.522 compress/zlib: not in enabled drivers build config 00:02:28.522 regex/mlx5: not in enabled drivers build config 00:02:28.522 regex/cn9k: not in enabled drivers build config 00:02:28.522 vdpa/ifc: not in enabled drivers build config 00:02:28.522 vdpa/mlx5: not in enabled drivers build config 00:02:28.522 vdpa/sfc: not in enabled drivers build config 00:02:28.522 event/cnxk: not in enabled drivers build config 00:02:28.522 event/dlb2: not in enabled drivers build config 00:02:28.522 event/dpaa: not in enabled drivers build config 00:02:28.522 event/dpaa2: not in enabled drivers build config 00:02:28.522 event/dsw: not in enabled drivers build config 00:02:28.522 event/opdl: not in enabled drivers build config 00:02:28.522 event/skeleton: not in enabled drivers build config 00:02:28.522 event/sw: not in enabled drivers build config 00:02:28.522 event/octeontx: not in enabled drivers build config 00:02:28.522 baseband/acc: not in enabled drivers build config 00:02:28.522 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:28.522 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:28.522 baseband/la12xx: not in enabled drivers build config 00:02:28.522 baseband/null: not in enabled drivers build config 00:02:28.522 baseband/turbo_sw: not in enabled drivers build config 00:02:28.522 gpu/cuda: not in enabled drivers build config 00:02:28.522 00:02:28.522 00:02:28.522 Build targets in project: 310 00:02:28.522 00:02:28.522 DPDK 22.11.4 00:02:28.522 00:02:28.522 User defined options 00:02:28.522 libdir : lib 00:02:28.522 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:28.522 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:28.522 c_link_args : 00:02:28.522 enable_docs : false 00:02:28.522 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:28.522 enable_kmods : false 00:02:28.522 machine : native 00:02:28.522 tests : false 00:02:28.522 00:02:28.522 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:28.522 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
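The setup warning above notes that the bare "meson [options]" spelling is deprecated. The same DPDK configure and build, expressed with the explicit setup subcommand; all option values are taken verbatim from the command in the log, assuming the same tree layout:

  #!/bin/bash
  # Non-deprecated spelling of the configure step above, then the build.
  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
  ninja -C build-tmp -j10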
00:02:28.522 16:19:58 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:28.522 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:28.522 [1/737] Generating lib/rte_kvargs_mingw with a custom command 00:02:28.522 [2/737] Generating lib/rte_telemetry_mingw with a custom command 00:02:28.522 [3/737] Generating lib/rte_telemetry_def with a custom command 00:02:28.522 [4/737] Generating lib/rte_kvargs_def with a custom command 00:02:28.522 [5/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:28.522 [6/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:28.781 [7/737] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:28.781 [8/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:28.781 [9/737] Linking static target lib/librte_kvargs.a 00:02:28.781 [10/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:28.781 [11/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:28.781 [12/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:28.781 [13/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:28.782 [14/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:28.782 [15/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.782 [16/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:28.782 [17/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.782 [18/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.782 [19/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:29.041 [20/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:29.041 [21/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:29.041 [22/737] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.041 [23/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:29.041 [24/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:29.041 [25/737] Linking target lib/librte_kvargs.so.23.0 00:02:29.041 [26/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:29.041 [27/737] Linking static target lib/librte_telemetry.a 00:02:29.041 [28/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:29.041 [29/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:29.041 [30/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:29.041 [31/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:29.041 [32/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:29.300 [33/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:29.300 [34/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:29.300 [35/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:29.300 [36/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:29.300 [37/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:29.300 [38/737] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:29.300 [39/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:29.300 [40/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:29.300 [41/737] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:29.559 [42/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:29.559 [43/737] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.559 [44/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:29.559 [45/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:29.559 [46/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:29.559 [47/737] Linking target lib/librte_telemetry.so.23.0 00:02:29.559 [48/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:29.559 [49/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:29.559 [50/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:29.559 [51/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:29.559 [52/737] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:29.559 [53/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:29.559 [54/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:29.817 [55/737] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:29.818 [56/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.818 [57/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:29.818 [58/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.818 [59/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.818 [60/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.818 [61/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.818 [62/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.818 [63/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.818 [64/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.818 [65/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:29.818 [66/737] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.818 [67/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:29.818 [68/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:29.818 [69/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:29.818 [70/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:30.076 [71/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:30.076 [72/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:30.076 [73/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:30.076 [74/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:30.076 [75/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:30.076 [76/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:30.076 [77/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:30.076 [78/737] Generating 
lib/rte_eal_def with a custom command 00:02:30.076 [79/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:30.076 [80/737] Generating lib/rte_eal_mingw with a custom command 00:02:30.076 [81/737] Generating lib/rte_ring_def with a custom command 00:02:30.076 [82/737] Generating lib/rte_rcu_def with a custom command 00:02:30.076 [83/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:30.076 [84/737] Generating lib/rte_ring_mingw with a custom command 00:02:30.076 [85/737] Generating lib/rte_rcu_mingw with a custom command 00:02:30.076 [86/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:30.334 [87/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:30.334 [88/737] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.334 [89/737] Linking static target lib/librte_ring.a 00:02:30.334 [90/737] Generating lib/rte_mempool_def with a custom command 00:02:30.334 [91/737] Generating lib/rte_mempool_mingw with a custom command 00:02:30.334 [92/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.334 [93/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.334 [94/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:30.592 [95/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:30.593 [96/737] Generating lib/rte_mbuf_def with a custom command 00:02:30.593 [97/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.593 [98/737] Generating lib/rte_mbuf_mingw with a custom command 00:02:30.593 [99/737] Linking static target lib/librte_eal.a 00:02:30.593 [100/737] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.593 [101/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.593 [102/737] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.593 [103/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:30.851 [104/737] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.851 [105/737] Linking static target lib/librte_rcu.a 00:02:30.851 [106/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.851 [107/737] Linking static target lib/librte_mempool.a 00:02:30.851 [108/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:30.851 [109/737] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:30.851 [110/737] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:31.109 [111/737] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:31.109 [112/737] Generating lib/rte_net_def with a custom command 00:02:31.109 [113/737] Generating lib/rte_net_mingw with a custom command 00:02:31.109 [114/737] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:31.109 [115/737] Generating lib/rte_meter_def with a custom command 00:02:31.109 [116/737] Generating lib/rte_meter_mingw with a custom command 00:02:31.109 [117/737] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:31.109 [118/737] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:31.109 [119/737] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:31.109 [120/737] Linking static target lib/librte_meter.a 00:02:31.109 [121/737] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:31.109 [122/737] Linking static target lib/librte_net.a 00:02:31.367 
[123/737] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.367 [124/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.367 [125/737] Linking static target lib/librte_mbuf.a 00:02:31.367 [126/737] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.367 [127/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:31.624 [128/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:31.624 [129/737] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.624 [130/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:31.624 [131/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:31.624 [132/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:31.881 [133/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:31.881 [134/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.881 [135/737] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.138 [136/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:32.138 [137/737] Generating lib/rte_ethdev_def with a custom command 00:02:32.138 [138/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:32.138 [139/737] Generating lib/rte_ethdev_mingw with a custom command 00:02:32.138 [140/737] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.138 [141/737] Generating lib/rte_pci_def with a custom command 00:02:32.138 [142/737] Generating lib/rte_pci_mingw with a custom command 00:02:32.138 [143/737] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.138 [144/737] Linking static target lib/librte_pci.a 00:02:32.138 [145/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:32.138 [146/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.396 [147/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:32.396 [148/737] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.396 [149/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:32.396 [150/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:32.396 [151/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:32.396 [152/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:32.396 [153/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:32.654 [154/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:32.654 [155/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:32.654 [156/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.654 [157/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:32.654 [158/737] Generating lib/rte_cmdline_mingw with a custom command 00:02:32.654 [159/737] Generating lib/rte_cmdline_def with a custom command 00:02:32.654 [160/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:32.654 [161/737] Generating lib/rte_metrics_def with a custom command 00:02:32.654 [162/737] Generating lib/rte_metrics_mingw with a custom command 00:02:32.654 [163/737] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:32.654 [164/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.654 [165/737] Generating lib/rte_hash_def with a custom command 00:02:32.654 [166/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:32.654 [167/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:32.654 [168/737] Generating lib/rte_hash_mingw with a custom command 00:02:32.654 [169/737] Linking static target lib/librte_cmdline.a 00:02:32.654 [170/737] Generating lib/rte_timer_def with a custom command 00:02:32.654 [171/737] Generating lib/rte_timer_mingw with a custom command 00:02:32.912 [172/737] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.912 [173/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:32.912 [174/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:32.912 [175/737] Linking static target lib/librte_metrics.a 00:02:33.170 [176/737] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:33.170 [177/737] Linking static target lib/librte_timer.a 00:02:33.427 [178/737] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:33.427 [179/737] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:33.427 [180/737] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.427 [181/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:33.427 [182/737] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.685 [183/737] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:33.685 [184/737] Generating lib/rte_acl_def with a custom command 00:02:33.685 [185/737] Generating lib/rte_acl_mingw with a custom command 00:02:33.685 [186/737] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:33.685 [187/737] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:33.685 [188/737] Generating lib/rte_bbdev_def with a custom command 00:02:33.685 [189/737] Generating lib/rte_bbdev_mingw with a custom command 00:02:33.685 [190/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:33.944 [191/737] Generating lib/rte_bitratestats_def with a custom command 00:02:33.944 [192/737] Linking static target lib/librte_ethdev.a 00:02:33.944 [193/737] Generating lib/rte_bitratestats_mingw with a custom command 00:02:33.944 [194/737] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.202 [195/737] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:34.202 [196/737] Linking static target lib/librte_bitratestats.a 00:02:34.202 [197/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:34.202 [198/737] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:34.202 [199/737] Linking static target lib/librte_bbdev.a 00:02:34.460 [200/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:34.460 [201/737] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.722 [202/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:34.722 [203/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:34.722 [204/737] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:34.722 [205/737] Linking static target lib/librte_hash.a 00:02:34.979 [206/737] Generating lib/bbdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:34.979 [207/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:35.236 [208/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:35.236 [209/737] Generating lib/rte_bpf_def with a custom command 00:02:35.236 [210/737] Generating lib/rte_bpf_mingw with a custom command 00:02:35.236 [211/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:35.236 [212/737] Generating lib/rte_cfgfile_def with a custom command 00:02:35.236 [213/737] Generating lib/rte_cfgfile_mingw with a custom command 00:02:35.493 [214/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:35.493 [215/737] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:35.493 [216/737] Linking static target lib/librte_cfgfile.a 00:02:35.493 [217/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:35.493 [218/737] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.493 [219/737] Generating lib/rte_compressdev_def with a custom command 00:02:35.751 [220/737] Generating lib/rte_compressdev_mingw with a custom command 00:02:35.751 [221/737] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.751 [222/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.751 [223/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.751 [224/737] Generating lib/rte_cryptodev_def with a custom command 00:02:36.009 [225/737] Generating lib/rte_cryptodev_mingw with a custom command 00:02:36.009 [226/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:36.009 [227/737] Linking static target lib/librte_bpf.a 00:02:36.009 [228/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:36.009 [229/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:36.009 [230/737] Linking static target lib/librte_compressdev.a 00:02:36.009 [231/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:36.009 [232/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:36.009 [233/737] Linking static target lib/librte_acl.a 00:02:36.267 [234/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:36.267 [235/737] Generating lib/rte_distributor_def with a custom command 00:02:36.267 [236/737] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.267 [237/737] Generating lib/rte_distributor_mingw with a custom command 00:02:36.267 [238/737] Generating lib/rte_efd_def with a custom command 00:02:36.267 [239/737] Generating lib/rte_efd_mingw with a custom command 00:02:36.525 [240/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:36.525 [241/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:36.525 [242/737] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.525 [243/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:36.525 [244/737] Linking static target lib/librte_distributor.a 00:02:36.784 [245/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:36.784 [246/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:36.784 [247/737] Generating 
lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.044 [248/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:37.303 [249/737] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.303 [250/737] Generating lib/rte_eventdev_def with a custom command 00:02:37.303 [251/737] Generating lib/rte_eventdev_mingw with a custom command 00:02:37.303 [252/737] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:37.303 [253/737] Linking static target lib/librte_efd.a 00:02:37.303 [254/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:37.303 [255/737] Generating lib/rte_gpudev_def with a custom command 00:02:37.561 [256/737] Generating lib/rte_gpudev_mingw with a custom command 00:02:37.561 [257/737] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.819 [258/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:37.819 [259/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:37.819 [260/737] Linking static target lib/librte_cryptodev.a 00:02:37.819 [261/737] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:37.819 [262/737] Linking static target lib/librte_gpudev.a 00:02:37.819 [263/737] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:37.819 [264/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:38.077 [265/737] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:38.077 [266/737] Generating lib/rte_gro_def with a custom command 00:02:38.077 [267/737] Generating lib/rte_gro_mingw with a custom command 00:02:38.337 [268/737] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:38.337 [269/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:38.337 [270/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:38.337 [271/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:38.337 [272/737] Linking static target lib/librte_gro.a 00:02:38.596 [273/737] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:38.596 [274/737] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:38.596 [275/737] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:38.596 [276/737] Generating lib/rte_gso_def with a custom command 00:02:38.596 [277/737] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.596 [278/737] Generating lib/rte_gso_mingw with a custom command 00:02:38.854 [279/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:38.854 [280/737] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.854 [281/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:38.854 [282/737] Linking static target lib/librte_eventdev.a 00:02:38.854 [283/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:39.113 [284/737] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:39.113 [285/737] Linking static target lib/librte_gso.a 00:02:39.113 [286/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:39.113 [287/737] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.113 [288/737] Generating lib/rte_ip_frag_def with a custom command 00:02:39.372 
[289/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:39.372 [290/737] Generating lib/rte_ip_frag_mingw with a custom command 00:02:39.372 [291/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:39.372 [292/737] Generating lib/rte_jobstats_def with a custom command 00:02:39.372 [293/737] Generating lib/rte_jobstats_mingw with a custom command 00:02:39.372 [294/737] Generating lib/rte_latencystats_def with a custom command 00:02:39.372 [295/737] Generating lib/rte_latencystats_mingw with a custom command 00:02:39.372 [296/737] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:39.372 [297/737] Linking static target lib/librte_jobstats.a 00:02:39.372 [298/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:39.372 [299/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:39.632 [300/737] Generating lib/rte_lpm_def with a custom command 00:02:39.632 [301/737] Generating lib/rte_lpm_mingw with a custom command 00:02:39.632 [302/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:39.632 [303/737] Linking static target lib/librte_ip_frag.a 00:02:39.632 [304/737] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.891 [305/737] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:39.891 [306/737] Linking static target lib/librte_latencystats.a 00:02:39.891 [307/737] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.891 [308/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:39.891 [309/737] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:39.891 [310/737] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:40.150 [311/737] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:40.150 [312/737] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.150 [313/737] Generating lib/rte_member_def with a custom command 00:02:40.150 [314/737] Generating lib/rte_member_mingw with a custom command 00:02:40.150 [315/737] Generating lib/rte_pcapng_def with a custom command 00:02:40.150 [316/737] Generating lib/rte_pcapng_mingw with a custom command 00:02:40.150 [317/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:40.150 [318/737] Linking static target lib/librte_lpm.a 00:02:40.410 [319/737] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:40.410 [320/737] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.410 [321/737] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:40.410 [322/737] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:40.410 [323/737] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:40.669 [324/737] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.669 [325/737] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.669 [326/737] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:40.669 [327/737] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:40.669 [328/737] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:40.669 [329/737] Linking static target 
lib/librte_pcapng.a 00:02:40.928 [330/737] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.928 [331/737] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:40.928 [332/737] Generating lib/rte_power_def with a custom command 00:02:40.928 [333/737] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:40.928 [334/737] Generating lib/rte_power_mingw with a custom command 00:02:40.928 [335/737] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:40.928 [336/737] Linking target lib/librte_eal.so.23.0 00:02:40.928 [337/737] Generating lib/rte_rawdev_def with a custom command 00:02:40.928 [338/737] Generating lib/rte_rawdev_mingw with a custom command 00:02:40.928 [339/737] Generating lib/rte_regexdev_def with a custom command 00:02:40.928 [340/737] Generating lib/rte_regexdev_mingw with a custom command 00:02:40.928 [341/737] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:40.928 [342/737] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:41.187 [343/737] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:41.187 [344/737] Linking target lib/librte_ring.so.23.0 00:02:41.187 [345/737] Linking target lib/librte_meter.so.23.0 00:02:41.187 [346/737] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.187 [347/737] Linking target lib/librte_pci.so.23.0 00:02:41.187 [348/737] Linking target lib/librte_timer.so.23.0 00:02:41.187 [349/737] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:41.187 [350/737] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:41.187 [351/737] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:41.187 [352/737] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:41.187 [353/737] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:41.187 [354/737] Linking target lib/librte_acl.so.23.0 00:02:41.187 [355/737] Linking target lib/librte_rcu.so.23.0 00:02:41.187 [356/737] Linking target lib/librte_cfgfile.so.23.0 00:02:41.187 [357/737] Linking target lib/librte_jobstats.so.23.0 00:02:41.187 [358/737] Linking static target lib/librte_power.a 00:02:41.187 [359/737] Linking target lib/librte_mempool.so.23.0 00:02:41.187 [360/737] Linking static target lib/librte_rawdev.a 00:02:41.447 [361/737] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:41.447 [362/737] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.447 [363/737] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.447 [364/737] Linking static target lib/librte_dmadev.a 00:02:41.447 [365/737] Generating lib/rte_dmadev_mingw with a custom command 00:02:41.447 [366/737] Generating lib/rte_dmadev_def with a custom command 00:02:41.447 [367/737] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:41.447 [368/737] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:41.447 [369/737] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:41.447 [370/737] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:41.447 [371/737] Linking static target lib/librte_regexdev.a 00:02:41.447 [372/737] Generating lib/rte_rib_def with a custom command 
00:02:41.447 [373/737] Generating lib/rte_rib_mingw with a custom command 00:02:41.447 [374/737] Generating lib/rte_reorder_def with a custom command 00:02:41.447 [375/737] Linking target lib/librte_mbuf.so.23.0 00:02:41.447 [376/737] Generating lib/rte_reorder_mingw with a custom command 00:02:41.447 [377/737] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:41.447 [378/737] Linking static target lib/librte_member.a 00:02:41.705 [379/737] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:41.705 [380/737] Linking target lib/librte_net.so.23.0 00:02:41.705 [381/737] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:41.963 [382/737] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.963 [383/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:41.963 [384/737] Linking target lib/librte_ethdev.so.23.0 00:02:41.963 [385/737] Linking target lib/librte_cmdline.so.23.0 00:02:41.963 [386/737] Linking target lib/librte_hash.so.23.0 00:02:41.963 [387/737] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.963 [388/737] Linking target lib/librte_bbdev.so.23.0 00:02:41.963 [389/737] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:41.963 [390/737] Linking target lib/librte_compressdev.so.23.0 00:02:41.963 [391/737] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.963 [392/737] Linking target lib/librte_cryptodev.so.23.0 00:02:41.963 [393/737] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:41.963 [394/737] Linking target lib/librte_distributor.so.23.0 00:02:41.963 [395/737] Linking target lib/librte_gpudev.so.23.0 00:02:41.964 [396/737] Linking target lib/librte_rawdev.so.23.0 00:02:41.964 [397/737] Linking target lib/librte_metrics.so.23.0 00:02:41.964 [398/737] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:41.964 [399/737] Linking target lib/librte_bpf.so.23.0 00:02:42.222 [400/737] Linking target lib/librte_gro.so.23.0 00:02:42.222 [401/737] Linking target lib/librte_gso.so.23.0 00:02:42.222 [402/737] Linking target lib/librte_efd.so.23.0 00:02:42.222 [403/737] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:42.222 [404/737] Linking target lib/librte_lpm.so.23.0 00:02:42.222 [405/737] Linking target lib/librte_ip_frag.so.23.0 00:02:42.222 [406/737] Linking target lib/librte_member.so.23.0 00:02:42.222 [407/737] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:42.222 [408/737] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:42.222 [409/737] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.222 [410/737] Linking target lib/librte_eventdev.so.23.0 00:02:42.222 [411/737] Linking target lib/librte_pcapng.so.23.0 00:02:42.222 [412/737] Linking target lib/librte_bitratestats.so.23.0 00:02:42.222 [413/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:42.222 [414/737] Linking static target lib/librte_rib.a 00:02:42.222 [415/737] Linking target lib/librte_regexdev.so.23.0 00:02:42.222 [416/737] Linking target lib/librte_latencystats.so.23.0 00:02:42.222 [417/737] Linking static target lib/librte_reorder.a 00:02:42.222 [418/737] Linking target lib/librte_dmadev.so.23.0 00:02:42.222 
[419/737] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:42.222 [420/737] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:42.223 [421/737] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:42.223 [422/737] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:42.482 [423/737] Generating lib/rte_sched_def with a custom command 00:02:42.482 [424/737] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:42.482 [425/737] Generating lib/rte_sched_mingw with a custom command 00:02:42.482 [426/737] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:42.482 [427/737] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:42.482 [428/737] Generating lib/rte_security_def with a custom command 00:02:42.482 [429/737] Generating lib/rte_security_mingw with a custom command 00:02:42.482 [430/737] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:42.482 [431/737] Generating lib/rte_stack_def with a custom command 00:02:42.482 [432/737] Generating lib/rte_stack_mingw with a custom command 00:02:42.482 [433/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:42.482 [434/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:42.482 [435/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:42.482 [436/737] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.482 [437/737] Linking static target lib/librte_stack.a 00:02:42.482 [438/737] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.482 [439/737] Linking target lib/librte_reorder.so.23.0 00:02:42.482 [440/737] Linking target lib/librte_power.so.23.0 00:02:42.482 [441/737] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:42.741 [442/737] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.741 [443/737] Linking target lib/librte_stack.so.23.0 00:02:42.741 [444/737] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.000 [445/737] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:43.000 [446/737] Linking target lib/librte_rib.so.23.0 00:02:43.000 [447/737] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:43.000 [448/737] Generating lib/rte_vhost_def with a custom command 00:02:43.000 [449/737] Generating lib/rte_vhost_mingw with a custom command 00:02:43.000 [450/737] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:43.000 [451/737] Linking static target lib/librte_security.a 00:02:43.000 [452/737] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:43.000 [453/737] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:43.000 [454/737] Linking static target lib/librte_sched.a 00:02:43.000 [455/737] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:43.568 [456/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:43.568 [457/737] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:43.568 [458/737] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.568 [459/737] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.568 [460/737] Linking target lib/librte_security.so.23.0 00:02:43.568 
[461/737] Linking target lib/librte_sched.so.23.0 00:02:43.568 [462/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:43.568 [463/737] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:43.568 [464/737] Generating lib/rte_ipsec_def with a custom command 00:02:43.568 [465/737] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:43.568 [466/737] Generating lib/rte_ipsec_mingw with a custom command 00:02:43.827 [467/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:43.827 [468/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:43.827 [469/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:44.087 [470/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:44.087 [471/737] Generating lib/rte_fib_def with a custom command 00:02:44.087 [472/737] Generating lib/rte_fib_mingw with a custom command 00:02:44.087 [473/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:44.347 [474/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:44.347 [475/737] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:44.347 [476/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:44.347 [477/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:44.347 [478/737] Linking static target lib/librte_ipsec.a 00:02:44.606 [479/737] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:44.606 [480/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:44.606 [481/737] Linking static target lib/librte_fib.a 00:02:44.606 [482/737] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:44.865 [483/737] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:44.865 [484/737] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:44.865 [485/737] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:44.865 [486/737] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.865 [487/737] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:44.865 [488/737] Linking target lib/librte_ipsec.so.23.0 00:02:45.122 [489/737] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.122 [490/737] Linking target lib/librte_fib.so.23.0 00:02:45.381 [491/737] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:45.381 [492/737] Generating lib/rte_port_def with a custom command 00:02:45.381 [493/737] Generating lib/rte_port_mingw with a custom command 00:02:45.381 [494/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:45.381 [495/737] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:45.381 [496/737] Generating lib/rte_pdump_def with a custom command 00:02:45.381 [497/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:45.381 [498/737] Generating lib/rte_pdump_mingw with a custom command 00:02:45.639 [499/737] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:45.639 [500/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:45.639 [501/737] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:45.639 [502/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:45.639 [503/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:45.897 
[504/737] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:45.897 [505/737] Linking static target lib/librte_port.a 00:02:45.897 [506/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:45.897 [507/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:46.155 [508/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:46.155 [509/737] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:46.155 [510/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:46.155 [511/737] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:46.155 [512/737] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:46.155 [513/737] Linking static target lib/librte_pdump.a 00:02:46.721 [514/737] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.721 [515/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:46.721 [516/737] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.721 [517/737] Linking target lib/librte_pdump.so.23.0 00:02:46.721 [518/737] Linking target lib/librte_port.so.23.0 00:02:46.721 [519/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:46.721 [520/737] Generating lib/rte_table_def with a custom command 00:02:46.721 [521/737] Generating lib/rte_table_mingw with a custom command 00:02:46.721 [522/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:46.721 [523/737] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:46.979 [524/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:46.979 [525/737] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:46.979 [526/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:46.979 [527/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:46.979 [528/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:47.237 [529/737] Linking static target lib/librte_table.a 00:02:47.237 [530/737] Generating lib/rte_pipeline_def with a custom command 00:02:47.237 [531/737] Generating lib/rte_pipeline_mingw with a custom command 00:02:47.237 [532/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:47.495 [533/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:47.495 [534/737] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:47.496 [535/737] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:47.753 [536/737] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:48.012 [537/737] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:48.012 [538/737] Generating lib/rte_graph_def with a custom command 00:02:48.012 [539/737] Generating lib/rte_graph_mingw with a custom command 00:02:48.012 [540/737] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:48.012 [541/737] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.012 [542/737] Linking target lib/librte_table.so.23.0 00:02:48.270 [543/737] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:48.270 [544/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:48.270 [545/737] Linking static target lib/librte_graph.a 
00:02:48.270 [546/737] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:48.270 [547/737] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:48.529 [548/737] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:48.529 [549/737] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:48.529 [550/737] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:48.787 [551/737] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:49.046 [552/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:49.046 [553/737] Generating lib/rte_node_def with a custom command 00:02:49.046 [554/737] Generating lib/rte_node_mingw with a custom command 00:02:49.046 [555/737] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:49.046 [556/737] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:49.046 [557/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:49.046 [558/737] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.046 [559/737] Linking target lib/librte_graph.so.23.0 00:02:49.305 [560/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.305 [561/737] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:49.305 [562/737] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:49.305 [563/737] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:49.305 [564/737] Linking static target lib/librte_node.a 00:02:49.305 [565/737] Generating drivers/rte_bus_pci_def with a custom command 00:02:49.305 [566/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:49.305 [567/737] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:49.305 [568/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.565 [569/737] Generating drivers/rte_bus_vdev_def with a custom command 00:02:49.565 [570/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.565 [571/737] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:49.565 [572/737] Generating drivers/rte_mempool_ring_def with a custom command 00:02:49.565 [573/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:49.565 [574/737] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:49.565 [575/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:49.565 [576/737] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:49.565 [577/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.565 [578/737] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.565 [579/737] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.565 [580/737] Linking target lib/librte_node.so.23.0 00:02:49.824 [581/737] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:49.824 [582/737] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.824 [583/737] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:49.824 [584/737] Linking static target drivers/librte_bus_vdev.a 00:02:49.824 [585/737] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.824 [586/737] Compiling C object 
drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.824 [587/737] Linking static target drivers/librte_bus_pci.a 00:02:50.083 [588/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.083 [589/737] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.083 [590/737] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.083 [591/737] Linking target drivers/librte_bus_vdev.so.23.0 00:02:50.083 [592/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:50.083 [593/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:50.083 [594/737] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:50.394 [595/737] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.394 [596/737] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.394 [597/737] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:50.394 [598/737] Linking target drivers/librte_bus_pci.so.23.0 00:02:50.394 [599/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:50.394 [600/737] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:50.394 [601/737] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.394 [602/737] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:50.394 [603/737] Linking static target drivers/librte_mempool_ring.a 00:02:50.394 [604/737] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.394 [605/737] Linking target drivers/librte_mempool_ring.so.23.0 00:02:50.652 [606/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:50.653 [607/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:51.220 [608/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:51.220 [609/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:51.479 [610/737] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:51.736 [611/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:51.993 [612/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:51.993 [613/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:51.993 [614/737] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:51.993 [615/737] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:52.558 [616/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:52.558 [617/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:52.558 [618/737] Generating drivers/rte_net_i40e_def with a custom command 00:02:52.558 [619/737] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:52.558 [620/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:52.815 [621/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:53.380 [622/737] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:53.380 [623/737] Compiling C object 
app/dpdk-pdump.p/pdump_main.c.o 00:02:53.380 [624/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:53.380 [625/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:53.380 [626/737] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:53.380 [627/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:53.639 [628/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:53.639 [629/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:53.639 [630/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:53.897 [631/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:54.154 [632/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:54.154 [633/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:54.154 [634/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:54.412 [635/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:54.412 [636/737] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:54.412 [637/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:54.668 [638/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:54.668 [639/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:54.668 [640/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:54.668 [641/737] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:54.668 [642/737] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:54.668 [643/737] Linking static target drivers/librte_net_i40e.a 00:02:54.926 [644/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:54.926 [645/737] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:54.926 [646/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:54.926 [647/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:55.184 [648/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:55.184 [649/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:55.442 [650/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:55.442 [651/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:55.442 [652/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:55.442 [653/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:55.442 [654/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:55.699 [655/737] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.699 [656/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:55.699 [657/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:55.699 
[658/737] Linking target drivers/librte_net_i40e.so.23.0 00:02:55.699 [659/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:55.699 [660/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:55.699 [661/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:55.956 [662/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:56.214 [663/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:56.214 [664/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:56.214 [665/737] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.472 [666/737] Linking static target lib/librte_vhost.a 00:02:56.472 [667/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:56.472 [668/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:56.805 [669/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:57.062 [670/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:57.063 [671/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:57.063 [672/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:57.063 [673/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:57.063 [674/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:57.063 [675/737] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:57.320 [676/737] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:57.579 [677/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:57.579 [678/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:57.579 [679/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:57.579 [680/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:57.579 [681/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:57.838 [682/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:57.838 [683/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:57.838 [684/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:57.838 [685/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:58.096 [686/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:58.096 [687/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:58.096 [688/737] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:58.355 [689/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:58.355 [690/737] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.355 [691/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:58.355 [692/737] Linking target lib/librte_vhost.so.23.0 00:02:58.613 [693/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:58.613 [694/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:58.871 [695/737] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:58.871 [696/737] Compiling C 
object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:59.130 [697/737] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:59.130 [698/737] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:59.130 [699/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:59.698 [700/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:59.698 [701/737] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:59.698 [702/737] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:59.698 [703/737] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:59.698 [704/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:59.957 [705/737] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:59.957 [706/737] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:00.216 [707/737] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:00.476 [708/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:00.476 [709/737] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:00.476 [710/737] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:00.735 [711/737] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:00.735 [712/737] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:00.735 [713/737] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:00.735 [714/737] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:00.735 [715/737] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:01.672 [716/737] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:01.672 [717/737] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:04.372 [718/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:04.631 [719/737] Linking static target lib/librte_pipeline.a 00:03:04.891 [720/737] Linking target app/dpdk-proc-info 00:03:04.891 [721/737] Linking target app/dpdk-test-acl 00:03:04.891 [722/737] Linking target app/dpdk-test-cmdline 00:03:04.891 [723/737] Linking target app/dpdk-test-compress-perf 00:03:04.891 [724/737] Linking target app/dpdk-pdump 00:03:05.150 [725/737] Linking target app/dpdk-test-bbdev 00:03:05.150 [726/737] Linking target app/dpdk-test-fib 00:03:05.150 [727/737] Linking target app/dpdk-test-eventdev 00:03:05.150 [728/737] Linking target app/dpdk-test-crypto-perf 00:03:05.409 [729/737] Linking target app/dpdk-test-gpudev 00:03:05.409 [730/737] Linking target app/dpdk-test-flow-perf 00:03:05.409 [731/737] Linking target app/dpdk-test-pipeline 00:03:05.409 [732/737] Linking target app/dpdk-test-regex 00:03:05.669 [733/737] Linking target app/dpdk-testpmd 00:03:05.669 [734/737] Linking target app/dpdk-test-security-perf 00:03:05.669 [735/737] Linking target app/dpdk-test-sad 00:03:08.958 [736/737] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.217 [737/737] Linking target lib/librte_pipeline.so.23.0 00:03:09.217 16:20:40 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:09.217 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:09.217 [0/1] Installing files. 
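For anyone reproducing this stage by hand: the install pass that follows is the tail of the standard meson/ninja flow. A minimal sketch, assuming a stock configuration; only the build directory, the -j10 job count, and the install prefix (visible in the destination paths recorded below) are taken from this log, the rest is an assumption:

  # configure; the prefix matches the install destinations recorded below
  meson setup --prefix=/home/vagrant/spdk_repo/dpdk/build /home/vagrant/spdk_repo/dpdk/build-tmp /home/vagrant/spdk_repo/dpdk
  # compile the 737 targets enumerated above
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
  # copy example sources, libraries and headers into the prefix
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install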
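The install step recorded below copies the example sources (ip_pipeline, l3fwd, helloworld, and so on) under share/dpdk/examples and the librte_* static and shared libraries under lib. Once installed, an example can typically be rebuilt against these libraries through pkg-config; a hedged sketch, assuming meson placed libdpdk.pc under the prefix's lib/pkgconfig (the exact pkgconfig subdirectory can vary by platform):

  # assumption: the .pc files live under <prefix>/lib/pkgconfig
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  cd /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
  make   # the installed example Makefiles resolve DPDK via pkg-config (libdpdk)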
00:03:09.477 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.478 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.478 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:09.479 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.480 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.481 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:09.740 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.740 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.740 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.740 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.740 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.740 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.740 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:09.741 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.741 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:09.741 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:09.741 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 
Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.003 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.003 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.003 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.004 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.004 Installing 
drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.004 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.004 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
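Note: the app/dpdk-* binaries above land in the prefix's bin directory, so they are runnable straight out of the build tree. A hedged usage sketch for dpdk-testpmd, the most commonly used of them (standard EAL/testpmd flags; the core list and channel count are placeholders, and root privileges plus configured hugepages are assumed):

    # -l selects lcores, -n the memory channels; -- separates EAL
    # arguments from testpmd's own, and -i starts the interactive shell
    sudo /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-testpmd -l 0-1 -n 4 -- -i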
00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.004 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 
Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.005 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.006 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:10.007 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:10.007 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:10.007 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:10.007 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:10.007 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:10.007 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:10.007 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:10.007 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:10.007 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:10.007 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:10.007 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:10.007 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:10.007 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:10.007 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:10.007 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:10.007 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:10.007 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:10.007 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:10.007 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:10.007 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:10.007 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:10.007 Installing symlink pointing to librte_pci.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:10.007 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:10.007 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:10.007 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:10.007 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:10.007 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:10.007 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:10.007 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:10.007 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:10.007 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:10.007 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:10.007 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:10.008 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:10.008 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:10.008 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:10.008 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:10.008 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:10.008 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:10.008 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:10.008 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:10.008 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:10.008 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:10.008 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:10.008 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:10.008 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:10.008 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:10.008 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:10.008 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:10.008 Installing symlink pointing to librte_eventdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:10.008 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:10.008 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:10.267 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:10.267 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:10.267 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:10.267 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:10.267 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:10.267 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:10.267 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:10.267 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:10.267 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:10.267 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:10.267 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:10.267 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:10.267 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:10.267 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:10.267 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:10.267 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:10.267 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:10.267 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:10.267 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:10.267 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:10.267 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:10.267 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:10.267 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:10.267 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:10.267 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:10.267 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:10.267 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:10.267 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:10.267 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:10.267 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 
00:03:10.267 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:10.267 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:10.267 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:10.267 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:10.267 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:10.267 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:10.267 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:10.267 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:10.267 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:10.267 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:10.267 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:10.267 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:10.267 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:10.267 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:10.267 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:10.267 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:10.267 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:10.267 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:10.267 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:10.267 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:10.267 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:10.267 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:10.267 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:10.267 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:10.267 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:10.267 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:10.267 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:10.267 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:10.267 Installing symlink pointing to librte_table.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:10.268 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:10.268 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:10.268 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:10.268 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:10.268 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:10.268 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:10.268 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:10.268 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:10.268 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:10.268 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:10.268 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:10.268 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:10.268 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:10.268 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:10.268 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:10.268 16:20:41 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:10.268 16:20:41 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:10.268 16:20:41 -- common/autobuild_common.sh@200 -- $ cat 00:03:10.268 16:20:41 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:10.268 ************************************ 00:03:10.268 END TEST build_native_dpdk 00:03:10.268 ************************************ 00:03:10.268 00:03:10.268 real 0m49.180s 00:03:10.268 user 4m40.615s 00:03:10.268 sys 0m55.628s 00:03:10.268 16:20:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:10.268 16:20:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:10.268 16:20:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:10.268 16:20:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:10.268 16:20:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:10.268 16:20:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:10.268 16:20:41 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:03:10.268 16:20:41 -- spdk/autobuild.sh@58 -- $ unittest_build 00:03:10.268 16:20:41 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:03:10.268 16:20:41 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:03:10.268 16:20:41 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:10.268 16:20:41 -- common/autotest_common.sh@10 -- $ set +x 
00:03:10.268 ************************************ 00:03:10.268 START TEST unittest_build 00:03:10.268 ************************************ 00:03:10.268 16:20:41 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:03:10.268 16:20:41 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:03:10.268 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:10.527 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.527 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:10.527 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:10.785 Using 'verbs' RDMA provider 00:03:29.524 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:44.435 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:44.435 Creating mk/config.mk...done. 00:03:44.435 Creating mk/cc.flags.mk...done. 00:03:44.435 Type 'make' to build. 00:03:44.435 16:21:13 -- common/autobuild_common.sh@403 -- $ make -j10 00:03:44.435 make[1]: Nothing to be done for 'all'. 00:04:02.562 CC lib/ut_mock/mock.o 00:04:02.562 CC lib/ut/ut.o 00:04:02.562 CC lib/log/log.o 00:04:02.562 CC lib/log/log_flags.o 00:04:02.562 CC lib/log/log_deprecated.o 00:04:02.562 LIB libspdk_ut_mock.a 00:04:02.562 LIB libspdk_log.a 00:04:02.562 LIB libspdk_ut.a 00:04:02.562 CC lib/ioat/ioat.o 00:04:02.562 CC lib/dma/dma.o 00:04:02.562 CC lib/util/base64.o 00:04:02.562 CC lib/util/bit_array.o 00:04:02.562 CC lib/util/crc16.o 00:04:02.562 CC lib/util/cpuset.o 00:04:02.563 CC lib/util/crc32.o 00:04:02.563 CXX lib/trace_parser/trace.o 00:04:02.563 CC lib/util/crc32c.o 00:04:02.563 CC lib/vfio_user/host/vfio_user_pci.o 00:04:02.563 CC lib/vfio_user/host/vfio_user.o 00:04:02.563 CC lib/util/crc32_ieee.o 00:04:02.563 CC lib/util/crc64.o 00:04:02.563 CC lib/util/dif.o 00:04:02.563 LIB libspdk_dma.a 00:04:02.563 CC lib/util/fd.o 00:04:02.563 CC lib/util/file.o 00:04:02.563 CC lib/util/hexlify.o 00:04:02.563 CC lib/util/iov.o 00:04:02.563 CC lib/util/math.o 00:04:02.563 LIB libspdk_ioat.a 00:04:02.563 CC lib/util/pipe.o 00:04:02.563 CC lib/util/strerror_tls.o 00:04:02.563 LIB libspdk_vfio_user.a 00:04:02.563 CC lib/util/string.o 00:04:02.563 CC lib/util/uuid.o 00:04:02.563 CC lib/util/fd_group.o 00:04:02.563 CC lib/util/xor.o 00:04:02.563 CC lib/util/zipf.o 00:04:02.563 LIB libspdk_util.a 00:04:02.563 CC lib/json/json_util.o 00:04:02.563 CC lib/json/json_write.o 00:04:02.563 CC lib/json/json_parse.o 00:04:02.563 CC lib/conf/conf.o 00:04:02.563 CC lib/rdma/common.o 00:04:02.563 CC lib/rdma/rdma_verbs.o 00:04:02.563 CC lib/env_dpdk/env.o 00:04:02.563 LIB libspdk_trace_parser.a 00:04:02.563 CC lib/idxd/idxd.o 00:04:02.563 CC lib/vmd/vmd.o 00:04:02.563 CC lib/vmd/led.o 00:04:02.563 CC lib/env_dpdk/memory.o 00:04:02.563 CC lib/env_dpdk/pci.o 00:04:02.563 LIB libspdk_conf.a 00:04:02.563 CC lib/env_dpdk/init.o 00:04:02.563 CC lib/env_dpdk/threads.o 00:04:02.563 CC lib/env_dpdk/pci_ioat.o 00:04:02.563 LIB libspdk_json.a 00:04:02.563 LIB libspdk_rdma.a 00:04:02.821 CC lib/idxd/idxd_user.o 00:04:02.821 CC lib/env_dpdk/pci_virtio.o 00:04:02.821 CC lib/env_dpdk/pci_vmd.o 00:04:02.821 CC lib/jsonrpc/jsonrpc_server.o 00:04:02.821 CC lib/env_dpdk/pci_idxd.o 00:04:02.821 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:04:02.821 CC lib/env_dpdk/pci_event.o 00:04:02.821 CC lib/env_dpdk/sigbus_handler.o 00:04:02.821 CC lib/env_dpdk/pci_dpdk.o 00:04:02.821 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:02.821 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:03.080 CC lib/jsonrpc/jsonrpc_client.o 00:04:03.080 LIB libspdk_idxd.a 00:04:03.080 LIB libspdk_vmd.a 00:04:03.080 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:03.338 LIB libspdk_jsonrpc.a 00:04:03.338 CC lib/rpc/rpc.o 00:04:03.596 LIB libspdk_rpc.a 00:04:03.596 LIB libspdk_env_dpdk.a 00:04:03.855 CC lib/notify/notify.o 00:04:03.855 CC lib/notify/notify_rpc.o 00:04:03.855 CC lib/trace/trace.o 00:04:03.855 CC lib/trace/trace_rpc.o 00:04:03.855 CC lib/sock/sock.o 00:04:03.855 CC lib/trace/trace_flags.o 00:04:03.855 CC lib/sock/sock_rpc.o 00:04:03.855 LIB libspdk_notify.a 00:04:04.114 LIB libspdk_trace.a 00:04:04.114 LIB libspdk_sock.a 00:04:04.114 CC lib/thread/iobuf.o 00:04:04.114 CC lib/thread/thread.o 00:04:04.373 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:04.373 CC lib/nvme/nvme_ctrlr.o 00:04:04.373 CC lib/nvme/nvme_ns_cmd.o 00:04:04.373 CC lib/nvme/nvme_fabric.o 00:04:04.373 CC lib/nvme/nvme_pcie.o 00:04:04.373 CC lib/nvme/nvme_ns.o 00:04:04.373 CC lib/nvme/nvme_qpair.o 00:04:04.373 CC lib/nvme/nvme_pcie_common.o 00:04:04.373 CC lib/nvme/nvme.o 00:04:04.939 CC lib/nvme/nvme_quirks.o 00:04:04.939 CC lib/nvme/nvme_transport.o 00:04:04.939 CC lib/nvme/nvme_discovery.o 00:04:04.939 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:05.202 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:05.202 CC lib/nvme/nvme_tcp.o 00:04:05.202 CC lib/nvme/nvme_opal.o 00:04:05.202 CC lib/nvme/nvme_io_msg.o 00:04:05.202 CC lib/nvme/nvme_poll_group.o 00:04:05.462 CC lib/nvme/nvme_zns.o 00:04:05.462 CC lib/nvme/nvme_cuse.o 00:04:05.462 CC lib/nvme/nvme_vfio_user.o 00:04:05.462 CC lib/nvme/nvme_rdma.o 00:04:05.718 LIB libspdk_thread.a 00:04:05.976 CC lib/blob/blobstore.o 00:04:05.976 CC lib/blob/request.o 00:04:05.976 CC lib/blob/zeroes.o 00:04:05.976 CC lib/accel/accel.o 00:04:05.976 CC lib/init/json_config.o 00:04:05.976 CC lib/virtio/virtio.o 00:04:05.976 CC lib/virtio/virtio_vhost_user.o 00:04:06.234 CC lib/virtio/virtio_vfio_user.o 00:04:06.234 CC lib/init/subsystem.o 00:04:06.234 CC lib/blob/blob_bs_dev.o 00:04:06.234 CC lib/init/subsystem_rpc.o 00:04:06.234 CC lib/virtio/virtio_pci.o 00:04:06.234 CC lib/accel/accel_rpc.o 00:04:06.234 CC lib/accel/accel_sw.o 00:04:06.490 CC lib/init/rpc.o 00:04:06.490 LIB libspdk_init.a 00:04:06.490 LIB libspdk_virtio.a 00:04:06.748 LIB libspdk_nvme.a 00:04:06.748 CC lib/event/reactor.o 00:04:06.748 CC lib/event/app.o 00:04:06.748 CC lib/event/app_rpc.o 00:04:06.748 CC lib/event/scheduler_static.o 00:04:06.748 CC lib/event/log_rpc.o 00:04:07.007 LIB libspdk_accel.a 00:04:07.007 CC lib/bdev/bdev.o 00:04:07.007 CC lib/bdev/bdev_rpc.o 00:04:07.007 CC lib/bdev/part.o 00:04:07.007 CC lib/bdev/bdev_zone.o 00:04:07.007 CC lib/bdev/scsi_nvme.o 00:04:07.007 LIB libspdk_event.a 00:04:08.910 LIB libspdk_blob.a 00:04:09.168 CC lib/blobfs/tree.o 00:04:09.168 CC lib/blobfs/blobfs.o 00:04:09.168 CC lib/lvol/lvol.o 00:04:09.734 LIB libspdk_bdev.a 00:04:09.734 CC lib/scsi/dev.o 00:04:09.734 CC lib/scsi/lun.o 00:04:09.734 CC lib/scsi/port.o 00:04:09.734 CC lib/scsi/scsi.o 00:04:09.734 CC lib/scsi/scsi_bdev.o 00:04:09.734 CC lib/nbd/nbd.o 00:04:09.734 CC lib/nvmf/ctrlr.o 00:04:09.992 CC lib/ftl/ftl_core.o 00:04:09.992 LIB libspdk_blobfs.a 00:04:09.993 LIB libspdk_lvol.a 00:04:09.993 CC lib/nvmf/ctrlr_discovery.o 00:04:09.993 CC lib/nvmf/ctrlr_bdev.o 00:04:09.993 CC 
lib/nvmf/subsystem.o 00:04:09.993 CC lib/nvmf/nvmf.o 00:04:10.276 CC lib/nvmf/nvmf_rpc.o 00:04:10.276 CC lib/nvmf/transport.o 00:04:10.276 CC lib/nbd/nbd_rpc.o 00:04:10.276 CC lib/scsi/scsi_pr.o 00:04:10.276 CC lib/ftl/ftl_init.o 00:04:10.577 LIB libspdk_nbd.a 00:04:10.577 CC lib/scsi/scsi_rpc.o 00:04:10.577 CC lib/nvmf/tcp.o 00:04:10.577 CC lib/ftl/ftl_layout.o 00:04:10.578 CC lib/scsi/task.o 00:04:10.578 CC lib/nvmf/rdma.o 00:04:10.837 CC lib/ftl/ftl_debug.o 00:04:10.837 CC lib/ftl/ftl_io.o 00:04:10.837 LIB libspdk_scsi.a 00:04:10.837 CC lib/ftl/ftl_sb.o 00:04:10.837 CC lib/ftl/ftl_l2p.o 00:04:10.837 CC lib/ftl/ftl_l2p_flat.o 00:04:10.837 CC lib/ftl/ftl_nv_cache.o 00:04:10.837 CC lib/ftl/ftl_band.o 00:04:11.095 CC lib/ftl/ftl_band_ops.o 00:04:11.095 CC lib/ftl/ftl_writer.o 00:04:11.095 CC lib/ftl/ftl_rq.o 00:04:11.095 CC lib/vhost/vhost.o 00:04:11.095 CC lib/iscsi/conn.o 00:04:11.353 CC lib/iscsi/init_grp.o 00:04:11.353 CC lib/ftl/ftl_reloc.o 00:04:11.353 CC lib/iscsi/iscsi.o 00:04:11.353 CC lib/iscsi/md5.o 00:04:11.609 CC lib/iscsi/param.o 00:04:11.609 CC lib/iscsi/portal_grp.o 00:04:11.609 CC lib/vhost/vhost_rpc.o 00:04:11.866 CC lib/vhost/vhost_scsi.o 00:04:11.866 CC lib/iscsi/tgt_node.o 00:04:11.866 CC lib/ftl/ftl_l2p_cache.o 00:04:11.866 CC lib/iscsi/iscsi_subsystem.o 00:04:11.866 CC lib/iscsi/iscsi_rpc.o 00:04:11.866 CC lib/iscsi/task.o 00:04:12.122 CC lib/vhost/vhost_blk.o 00:04:12.122 CC lib/vhost/rte_vhost_user.o 00:04:12.122 CC lib/ftl/ftl_p2l.o 00:04:12.122 CC lib/ftl/mngt/ftl_mngt.o 00:04:12.380 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:12.380 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:12.380 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:12.380 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:12.380 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:12.380 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:12.638 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:12.638 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:12.638 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:12.638 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:12.638 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:12.638 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:12.638 CC lib/ftl/utils/ftl_conf.o 00:04:12.638 CC lib/ftl/utils/ftl_md.o 00:04:12.895 LIB libspdk_iscsi.a 00:04:12.895 CC lib/ftl/utils/ftl_mempool.o 00:04:12.896 CC lib/ftl/utils/ftl_bitmap.o 00:04:12.896 CC lib/ftl/utils/ftl_property.o 00:04:12.896 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:12.896 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:12.896 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:12.896 LIB libspdk_nvmf.a 00:04:12.896 LIB libspdk_vhost.a 00:04:12.896 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:12.896 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:12.896 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:13.153 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:13.153 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:13.153 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:13.153 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:13.153 CC lib/ftl/base/ftl_base_dev.o 00:04:13.154 CC lib/ftl/base/ftl_base_bdev.o 00:04:13.154 CC lib/ftl/ftl_trace.o 00:04:13.411 LIB libspdk_ftl.a 00:04:13.976 CC module/env_dpdk/env_dpdk_rpc.o 00:04:13.976 CC module/accel/dsa/accel_dsa.o 00:04:13.976 CC module/accel/error/accel_error.o 00:04:13.976 CC module/accel/iaa/accel_iaa.o 00:04:13.976 CC module/scheduler/gscheduler/gscheduler.o 00:04:13.976 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:13.976 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:13.976 CC module/accel/ioat/accel_ioat.o 00:04:13.976 CC module/blob/bdev/blob_bdev.o 00:04:13.976 CC module/sock/posix/posix.o 00:04:13.976 LIB 
libspdk_env_dpdk_rpc.a 00:04:13.976 LIB libspdk_scheduler_dpdk_governor.a 00:04:13.976 CC module/accel/iaa/accel_iaa_rpc.o 00:04:13.976 CC module/accel/error/accel_error_rpc.o 00:04:13.976 CC module/accel/ioat/accel_ioat_rpc.o 00:04:13.976 CC module/accel/dsa/accel_dsa_rpc.o 00:04:13.976 LIB libspdk_scheduler_dynamic.a 00:04:13.976 LIB libspdk_scheduler_gscheduler.a 00:04:14.234 LIB libspdk_blob_bdev.a 00:04:14.234 LIB libspdk_accel_iaa.a 00:04:14.234 LIB libspdk_accel_error.a 00:04:14.234 LIB libspdk_accel_ioat.a 00:04:14.234 LIB libspdk_accel_dsa.a 00:04:14.234 CC module/bdev/gpt/gpt.o 00:04:14.234 CC module/bdev/error/vbdev_error.o 00:04:14.234 CC module/bdev/delay/vbdev_delay.o 00:04:14.234 CC module/bdev/lvol/vbdev_lvol.o 00:04:14.234 CC module/blobfs/bdev/blobfs_bdev.o 00:04:14.234 CC module/bdev/malloc/bdev_malloc.o 00:04:14.234 CC module/bdev/null/bdev_null.o 00:04:14.234 CC module/bdev/passthru/vbdev_passthru.o 00:04:14.234 CC module/bdev/nvme/bdev_nvme.o 00:04:14.491 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:14.491 CC module/bdev/gpt/vbdev_gpt.o 00:04:14.491 CC module/bdev/error/vbdev_error_rpc.o 00:04:14.491 CC module/bdev/null/bdev_null_rpc.o 00:04:14.750 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:14.750 LIB libspdk_blobfs_bdev.a 00:04:14.750 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:14.750 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:14.750 LIB libspdk_bdev_error.a 00:04:14.750 LIB libspdk_sock_posix.a 00:04:14.750 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:14.750 LIB libspdk_bdev_gpt.a 00:04:14.750 CC module/bdev/nvme/nvme_rpc.o 00:04:14.750 LIB libspdk_bdev_null.a 00:04:14.750 CC module/bdev/nvme/bdev_mdns_client.o 00:04:14.750 LIB libspdk_bdev_passthru.a 00:04:14.750 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:14.750 LIB libspdk_bdev_malloc.a 00:04:14.750 LIB libspdk_bdev_delay.a 00:04:14.750 CC module/bdev/nvme/vbdev_opal.o 00:04:15.008 CC module/bdev/raid/bdev_raid.o 00:04:15.008 CC module/bdev/raid/bdev_raid_rpc.o 00:04:15.008 CC module/bdev/raid/bdev_raid_sb.o 00:04:15.008 CC module/bdev/raid/raid0.o 00:04:15.008 CC module/bdev/split/vbdev_split.o 00:04:15.008 CC module/bdev/split/vbdev_split_rpc.o 00:04:15.008 CC module/bdev/raid/raid1.o 00:04:15.008 CC module/bdev/raid/concat.o 00:04:15.266 LIB libspdk_bdev_lvol.a 00:04:15.266 CC module/bdev/raid/raid5f.o 00:04:15.266 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:15.266 LIB libspdk_bdev_split.a 00:04:15.266 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:15.266 CC module/bdev/aio/bdev_aio.o 00:04:15.266 CC module/bdev/aio/bdev_aio_rpc.o 00:04:15.266 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:15.266 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:15.266 CC module/bdev/ftl/bdev_ftl.o 00:04:15.525 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:15.525 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:15.525 CC module/bdev/iscsi/bdev_iscsi.o 00:04:15.525 LIB libspdk_bdev_zone_block.a 00:04:15.525 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:15.525 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:15.525 LIB libspdk_bdev_aio.a 00:04:15.784 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:15.784 LIB libspdk_bdev_ftl.a 00:04:15.784 LIB libspdk_bdev_raid.a 00:04:16.043 LIB libspdk_bdev_iscsi.a 00:04:16.043 LIB libspdk_bdev_virtio.a 00:04:16.611 LIB libspdk_bdev_nvme.a 00:04:16.870 CC module/event/subsystems/iobuf/iobuf.o 00:04:16.870 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:16.870 CC module/event/subsystems/sock/sock.o 00:04:16.870 CC module/event/subsystems/vmd/vmd.o 00:04:16.870 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:04:16.870 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:16.870 CC module/event/subsystems/scheduler/scheduler.o 00:04:17.129 LIB libspdk_event_vhost_blk.a 00:04:17.129 LIB libspdk_event_sock.a 00:04:17.129 LIB libspdk_event_scheduler.a 00:04:17.129 LIB libspdk_event_vmd.a 00:04:17.129 LIB libspdk_event_iobuf.a 00:04:17.388 CC module/event/subsystems/accel/accel.o 00:04:17.388 LIB libspdk_event_accel.a 00:04:17.647 CC module/event/subsystems/bdev/bdev.o 00:04:17.906 LIB libspdk_event_bdev.a 00:04:18.165 CC module/event/subsystems/scsi/scsi.o 00:04:18.165 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:18.165 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:18.165 CC module/event/subsystems/nbd/nbd.o 00:04:18.165 LIB libspdk_event_scsi.a 00:04:18.165 LIB libspdk_event_nbd.a 00:04:18.424 LIB libspdk_event_nvmf.a 00:04:18.424 CC module/event/subsystems/iscsi/iscsi.o 00:04:18.424 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:18.683 LIB libspdk_event_vhost_scsi.a 00:04:18.683 LIB libspdk_event_iscsi.a 00:04:18.943 CC app/trace_record/trace_record.o 00:04:18.943 TEST_HEADER include/spdk/accel.h 00:04:18.943 TEST_HEADER include/spdk/accel_module.h 00:04:18.943 CXX app/trace/trace.o 00:04:18.943 TEST_HEADER include/spdk/assert.h 00:04:18.943 TEST_HEADER include/spdk/barrier.h 00:04:18.943 TEST_HEADER include/spdk/base64.h 00:04:18.943 TEST_HEADER include/spdk/bdev.h 00:04:18.943 TEST_HEADER include/spdk/bdev_module.h 00:04:18.943 TEST_HEADER include/spdk/bdev_zone.h 00:04:18.943 TEST_HEADER include/spdk/bit_array.h 00:04:18.943 TEST_HEADER include/spdk/bit_pool.h 00:04:18.943 TEST_HEADER include/spdk/blob.h 00:04:18.943 TEST_HEADER include/spdk/blob_bdev.h 00:04:18.943 TEST_HEADER include/spdk/blobfs.h 00:04:18.943 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:18.943 TEST_HEADER include/spdk/conf.h 00:04:18.943 TEST_HEADER include/spdk/config.h 00:04:18.943 TEST_HEADER include/spdk/cpuset.h 00:04:18.943 CC examples/accel/perf/accel_perf.o 00:04:18.943 TEST_HEADER include/spdk/crc16.h 00:04:18.943 TEST_HEADER include/spdk/crc32.h 00:04:18.943 TEST_HEADER include/spdk/crc64.h 00:04:18.943 TEST_HEADER include/spdk/dif.h 00:04:18.943 TEST_HEADER include/spdk/dma.h 00:04:18.943 TEST_HEADER include/spdk/endian.h 00:04:18.943 TEST_HEADER include/spdk/env.h 00:04:18.943 TEST_HEADER include/spdk/env_dpdk.h 00:04:18.943 TEST_HEADER include/spdk/event.h 00:04:18.943 CC test/dma/test_dma/test_dma.o 00:04:18.943 TEST_HEADER include/spdk/fd.h 00:04:18.943 CC test/bdev/bdevio/bdevio.o 00:04:18.943 TEST_HEADER include/spdk/fd_group.h 00:04:18.943 TEST_HEADER include/spdk/file.h 00:04:18.943 TEST_HEADER include/spdk/ftl.h 00:04:18.943 TEST_HEADER include/spdk/gpt_spec.h 00:04:18.943 CC test/blobfs/mkfs/mkfs.o 00:04:18.943 TEST_HEADER include/spdk/hexlify.h 00:04:18.943 TEST_HEADER include/spdk/histogram_data.h 00:04:18.943 TEST_HEADER include/spdk/idxd.h 00:04:18.943 TEST_HEADER include/spdk/idxd_spec.h 00:04:18.943 CC test/accel/dif/dif.o 00:04:18.943 CC test/env/mem_callbacks/mem_callbacks.o 00:04:18.943 TEST_HEADER include/spdk/init.h 00:04:18.943 TEST_HEADER include/spdk/ioat.h 00:04:18.943 TEST_HEADER include/spdk/ioat_spec.h 00:04:18.943 CC test/app/bdev_svc/bdev_svc.o 00:04:18.943 TEST_HEADER include/spdk/iscsi_spec.h 00:04:18.943 TEST_HEADER include/spdk/json.h 00:04:18.943 TEST_HEADER include/spdk/jsonrpc.h 00:04:18.943 TEST_HEADER include/spdk/likely.h 00:04:18.943 TEST_HEADER include/spdk/log.h 00:04:18.943 TEST_HEADER include/spdk/lvol.h 
00:04:18.943 TEST_HEADER include/spdk/memory.h 00:04:18.943 TEST_HEADER include/spdk/mmio.h 00:04:18.943 TEST_HEADER include/spdk/nbd.h 00:04:18.943 TEST_HEADER include/spdk/notify.h 00:04:18.943 TEST_HEADER include/spdk/nvme.h 00:04:18.943 TEST_HEADER include/spdk/nvme_intel.h 00:04:18.943 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:18.943 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:18.943 TEST_HEADER include/spdk/nvme_spec.h 00:04:18.943 TEST_HEADER include/spdk/nvme_zns.h 00:04:18.943 TEST_HEADER include/spdk/nvmf.h 00:04:18.943 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:18.943 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:18.943 TEST_HEADER include/spdk/nvmf_spec.h 00:04:18.944 TEST_HEADER include/spdk/nvmf_transport.h 00:04:18.944 TEST_HEADER include/spdk/opal.h 00:04:18.944 TEST_HEADER include/spdk/opal_spec.h 00:04:18.944 TEST_HEADER include/spdk/pci_ids.h 00:04:18.944 TEST_HEADER include/spdk/pipe.h 00:04:18.944 TEST_HEADER include/spdk/queue.h 00:04:18.944 TEST_HEADER include/spdk/reduce.h 00:04:18.944 TEST_HEADER include/spdk/rpc.h 00:04:18.944 TEST_HEADER include/spdk/scheduler.h 00:04:18.944 TEST_HEADER include/spdk/scsi.h 00:04:19.203 TEST_HEADER include/spdk/scsi_spec.h 00:04:19.203 TEST_HEADER include/spdk/sock.h 00:04:19.203 TEST_HEADER include/spdk/stdinc.h 00:04:19.203 TEST_HEADER include/spdk/string.h 00:04:19.203 TEST_HEADER include/spdk/thread.h 00:04:19.203 TEST_HEADER include/spdk/trace.h 00:04:19.203 TEST_HEADER include/spdk/trace_parser.h 00:04:19.203 TEST_HEADER include/spdk/tree.h 00:04:19.203 TEST_HEADER include/spdk/ublk.h 00:04:19.203 TEST_HEADER include/spdk/util.h 00:04:19.203 TEST_HEADER include/spdk/uuid.h 00:04:19.203 TEST_HEADER include/spdk/version.h 00:04:19.203 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:19.203 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:19.203 TEST_HEADER include/spdk/vhost.h 00:04:19.203 TEST_HEADER include/spdk/vmd.h 00:04:19.203 TEST_HEADER include/spdk/xor.h 00:04:19.204 TEST_HEADER include/spdk/zipf.h 00:04:19.204 CXX test/cpp_headers/accel.o 00:04:19.204 LINK bdev_svc 00:04:19.204 LINK mkfs 00:04:19.204 LINK mem_callbacks 00:04:19.204 LINK spdk_trace_record 00:04:19.204 LINK spdk_trace 00:04:19.204 CXX test/cpp_headers/accel_module.o 00:04:19.464 LINK test_dma 00:04:19.464 LINK accel_perf 00:04:19.464 LINK bdevio 00:04:19.464 CXX test/cpp_headers/assert.o 00:04:19.464 LINK dif 00:04:19.723 CXX test/cpp_headers/barrier.o 00:04:19.723 CC test/env/vtophys/vtophys.o 00:04:19.723 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:19.723 CC app/nvmf_tgt/nvmf_main.o 00:04:19.723 CXX test/cpp_headers/base64.o 00:04:19.723 LINK vtophys 00:04:19.723 LINK env_dpdk_post_init 00:04:19.983 CXX test/cpp_headers/bdev.o 00:04:19.983 LINK nvmf_tgt 00:04:19.983 CXX test/cpp_headers/bdev_module.o 00:04:20.242 CXX test/cpp_headers/bdev_zone.o 00:04:20.502 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.502 CXX test/cpp_headers/bit_array.o 00:04:20.502 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:20.502 CXX test/cpp_headers/bit_pool.o 00:04:20.762 LINK hello_bdev 00:04:20.762 CXX test/cpp_headers/blob.o 00:04:20.762 CXX test/cpp_headers/blob_bdev.o 00:04:21.065 CC test/env/memory/memory_ut.o 00:04:21.065 LINK nvme_fuzz 00:04:21.065 CXX test/cpp_headers/blobfs.o 00:04:21.325 CXX test/cpp_headers/blobfs_bdev.o 00:04:21.325 CXX test/cpp_headers/conf.o 00:04:21.325 LINK memory_ut 00:04:21.583 CXX test/cpp_headers/config.o 00:04:21.583 CXX test/cpp_headers/cpuset.o 00:04:21.583 CC test/env/pci/pci_ut.o 00:04:21.583 CXX 
test/cpp_headers/crc16.o 00:04:21.841 CXX test/cpp_headers/crc32.o 00:04:21.841 CXX test/cpp_headers/crc64.o 00:04:21.841 LINK pci_ut 00:04:22.098 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:22.098 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:22.098 CXX test/cpp_headers/dif.o 00:04:22.098 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:22.098 CC test/app/histogram_perf/histogram_perf.o 00:04:22.357 CXX test/cpp_headers/dma.o 00:04:22.357 CC examples/bdev/bdevperf/bdevperf.o 00:04:22.357 CC test/event/reactor/reactor.o 00:04:22.357 CC test/event/event_perf/event_perf.o 00:04:22.357 LINK histogram_perf 00:04:22.357 CXX test/cpp_headers/endian.o 00:04:22.357 CXX test/cpp_headers/env.o 00:04:22.615 LINK reactor 00:04:22.615 LINK event_perf 00:04:22.615 LINK vhost_fuzz 00:04:22.615 CXX test/cpp_headers/env_dpdk.o 00:04:22.615 CC test/lvol/esnap/esnap.o 00:04:22.874 CXX test/cpp_headers/event.o 00:04:22.874 CXX test/cpp_headers/fd.o 00:04:22.874 LINK bdevperf 00:04:23.133 CC test/nvme/aer/aer.o 00:04:23.133 CXX test/cpp_headers/fd_group.o 00:04:23.133 CC test/rpc_client/rpc_client_test.o 00:04:23.133 CC test/event/reactor_perf/reactor_perf.o 00:04:23.392 CXX test/cpp_headers/file.o 00:04:23.392 LINK aer 00:04:23.392 CC test/event/app_repeat/app_repeat.o 00:04:23.392 LINK rpc_client_test 00:04:23.392 LINK reactor_perf 00:04:23.392 CXX test/cpp_headers/ftl.o 00:04:23.392 CC examples/blob/hello_world/hello_blob.o 00:04:23.392 CC app/iscsi_tgt/iscsi_tgt.o 00:04:23.651 LINK app_repeat 00:04:23.651 CXX test/cpp_headers/gpt_spec.o 00:04:23.651 LINK iscsi_fuzz 00:04:23.651 LINK iscsi_tgt 00:04:23.651 LINK hello_blob 00:04:23.909 CXX test/cpp_headers/hexlify.o 00:04:23.909 CXX test/cpp_headers/histogram_data.o 00:04:23.909 CC app/spdk_tgt/spdk_tgt.o 00:04:23.909 CXX test/cpp_headers/idxd.o 00:04:24.168 CXX test/cpp_headers/idxd_spec.o 00:04:24.168 LINK spdk_tgt 00:04:24.168 CXX test/cpp_headers/init.o 00:04:24.168 CC examples/ioat/perf/perf.o 00:04:24.427 CXX test/cpp_headers/ioat.o 00:04:24.427 CC test/nvme/reset/reset.o 00:04:24.427 LINK ioat_perf 00:04:24.427 CXX test/cpp_headers/ioat_spec.o 00:04:24.686 CC test/event/scheduler/scheduler.o 00:04:24.686 CXX test/cpp_headers/iscsi_spec.o 00:04:24.686 LINK reset 00:04:24.686 CC test/app/jsoncat/jsoncat.o 00:04:24.686 LINK scheduler 00:04:24.686 LINK jsoncat 00:04:24.686 CXX test/cpp_headers/json.o 00:04:24.945 CXX test/cpp_headers/jsonrpc.o 00:04:24.945 CC examples/ioat/verify/verify.o 00:04:25.203 CXX test/cpp_headers/likely.o 00:04:25.203 LINK verify 00:04:25.203 CXX test/cpp_headers/log.o 00:04:25.462 CXX test/cpp_headers/lvol.o 00:04:25.462 CC test/app/stub/stub.o 00:04:25.462 CXX test/cpp_headers/memory.o 00:04:25.720 LINK stub 00:04:25.720 CXX test/cpp_headers/mmio.o 00:04:25.720 CC test/nvme/sgl/sgl.o 00:04:25.720 CXX test/cpp_headers/nbd.o 00:04:25.720 CXX test/cpp_headers/notify.o 00:04:25.977 CC examples/nvme/hello_world/hello_world.o 00:04:25.977 CXX test/cpp_headers/nvme.o 00:04:25.977 LINK sgl 00:04:26.233 LINK hello_world 00:04:26.233 CXX test/cpp_headers/nvme_intel.o 00:04:26.233 CXX test/cpp_headers/nvme_ocssd.o 00:04:26.491 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:26.491 CXX test/cpp_headers/nvme_spec.o 00:04:26.491 CC examples/blob/cli/blobcli.o 00:04:26.749 CXX test/cpp_headers/nvme_zns.o 00:04:26.749 CC examples/sock/hello_world/hello_sock.o 00:04:26.749 CXX test/cpp_headers/nvmf.o 00:04:26.749 CC app/spdk_lspci/spdk_lspci.o 00:04:27.007 LINK hello_sock 00:04:27.007 CC app/spdk_nvme_perf/perf.o 00:04:27.007 CXX 
test/cpp_headers/nvmf_cmd.o 00:04:27.007 LINK spdk_lspci 00:04:27.007 LINK blobcli 00:04:27.007 CC test/nvme/e2edp/nvme_dp.o 00:04:27.265 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:27.265 CC examples/nvme/reconnect/reconnect.o 00:04:27.265 CXX test/cpp_headers/nvmf_spec.o 00:04:27.523 LINK nvme_dp 00:04:27.523 LINK esnap 00:04:27.524 CXX test/cpp_headers/nvmf_transport.o 00:04:27.524 LINK reconnect 00:04:27.782 CXX test/cpp_headers/opal.o 00:04:27.782 CC app/spdk_nvme_identify/identify.o 00:04:27.782 CC app/spdk_nvme_discover/discovery_aer.o 00:04:27.782 LINK spdk_nvme_perf 00:04:27.782 CXX test/cpp_headers/opal_spec.o 00:04:28.039 LINK spdk_nvme_discover 00:04:28.039 CXX test/cpp_headers/pci_ids.o 00:04:28.039 CXX test/cpp_headers/pipe.o 00:04:28.039 CXX test/cpp_headers/queue.o 00:04:28.297 CXX test/cpp_headers/reduce.o 00:04:28.297 CC app/spdk_top/spdk_top.o 00:04:28.297 CXX test/cpp_headers/rpc.o 00:04:28.297 CC app/vhost/vhost.o 00:04:28.555 CXX test/cpp_headers/scheduler.o 00:04:28.555 CC test/nvme/overhead/overhead.o 00:04:28.813 LINK spdk_nvme_identify 00:04:28.813 LINK vhost 00:04:28.813 CXX test/cpp_headers/scsi.o 00:04:28.813 CC app/spdk_dd/spdk_dd.o 00:04:28.813 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:28.813 CXX test/cpp_headers/scsi_spec.o 00:04:28.813 LINK overhead 00:04:28.813 CXX test/cpp_headers/sock.o 00:04:29.070 CXX test/cpp_headers/stdinc.o 00:04:29.070 CC app/fio/nvme/fio_plugin.o 00:04:29.070 LINK spdk_dd 00:04:29.070 CXX test/cpp_headers/string.o 00:04:29.327 CXX test/cpp_headers/thread.o 00:04:29.327 CXX test/cpp_headers/trace.o 00:04:29.327 CXX test/cpp_headers/trace_parser.o 00:04:29.585 LINK spdk_top 00:04:29.585 LINK nvme_manage 00:04:29.843 CXX test/cpp_headers/tree.o 00:04:29.843 LINK spdk_nvme 00:04:29.843 CC app/fio/bdev/fio_plugin.o 00:04:29.843 CXX test/cpp_headers/ublk.o 00:04:29.843 CXX test/cpp_headers/util.o 00:04:30.138 CXX test/cpp_headers/uuid.o 00:04:30.138 CC examples/vmd/lsvmd/lsvmd.o 00:04:30.138 CC examples/nvmf/nvmf/nvmf.o 00:04:30.138 CXX test/cpp_headers/version.o 00:04:30.138 CC test/nvme/err_injection/err_injection.o 00:04:30.138 LINK lsvmd 00:04:30.138 CXX test/cpp_headers/vfio_user_pci.o 00:04:30.138 CC test/nvme/startup/startup.o 00:04:30.402 LINK spdk_bdev 00:04:30.402 CXX test/cpp_headers/vfio_user_spec.o 00:04:30.402 LINK err_injection 00:04:30.402 LINK startup 00:04:30.402 LINK nvmf 00:04:30.402 CXX test/cpp_headers/vhost.o 00:04:30.660 CXX test/cpp_headers/vmd.o 00:04:30.918 CXX test/cpp_headers/xor.o 00:04:30.918 CC examples/nvme/arbitration/arbitration.o 00:04:30.918 CXX test/cpp_headers/zipf.o 00:04:31.177 CC examples/nvme/hotplug/hotplug.o 00:04:31.177 CC test/thread/poller_perf/poller_perf.o 00:04:31.177 LINK arbitration 00:04:31.177 CC examples/vmd/led/led.o 00:04:31.177 LINK hotplug 00:04:31.436 LINK poller_perf 00:04:31.436 LINK led 00:04:31.436 CC test/thread/lock/spdk_lock.o 00:04:31.695 CC test/nvme/reserve/reserve.o 00:04:31.953 LINK reserve 00:04:31.953 CC test/nvme/simple_copy/simple_copy.o 00:04:32.211 CC test/nvme/connect_stress/connect_stress.o 00:04:32.211 CC test/nvme/boot_partition/boot_partition.o 00:04:32.211 LINK simple_copy 00:04:32.211 LINK connect_stress 00:04:32.211 CC test/nvme/compliance/nvme_compliance.o 00:04:32.469 LINK boot_partition 00:04:32.469 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:32.469 CC test/nvme/fused_ordering/fused_ordering.o 00:04:32.728 LINK cmb_copy 00:04:32.728 LINK nvme_compliance 00:04:32.728 LINK fused_ordering 00:04:32.986 CC test/nvme/doorbell_aers/doorbell_aers.o 
00:04:32.986 LINK spdk_lock 00:04:33.244 LINK doorbell_aers 00:04:33.244 CC test/nvme/fdp/fdp.o 00:04:33.244 CC test/nvme/cuse/cuse.o 00:04:33.501 CC examples/util/zipf/zipf.o 00:04:33.760 LINK fdp 00:04:33.760 LINK zipf 00:04:33.760 CC examples/thread/thread/thread_ex.o 00:04:33.760 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:33.760 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:33.760 CC examples/nvme/abort/abort.o 00:04:33.760 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:34.020 LINK thread 00:04:34.020 LINK histogram_ut 00:04:34.020 CC examples/idxd/perf/perf.o 00:04:34.020 LINK pmr_persistence 00:04:34.020 LINK cuse 00:04:34.279 LINK abort 00:04:34.279 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:34.279 LINK idxd_perf 00:04:34.279 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:34.279 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:34.538 LINK interrupt_tgt 00:04:34.538 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:34.538 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:34.798 LINK tree_ut 00:04:35.056 CC test/unit/lib/event/app.c/app_ut.o 00:04:35.056 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:35.056 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:35.056 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:35.314 LINK blob_bdev_ut 00:04:35.314 LINK dma_ut 00:04:35.572 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:35.572 LINK ioat_ut 00:04:35.572 LINK app_ut 00:04:35.572 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:35.830 LINK blobfs_async_ut 00:04:35.830 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:35.830 LINK accel_ut 00:04:35.830 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:36.089 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:04:36.089 LINK conn_ut 00:04:36.089 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:36.348 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:36.348 LINK json_util_ut 00:04:36.348 LINK reactor_ut 00:04:36.348 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:36.606 CC test/unit/lib/log/log.c/log_ut.o 00:04:36.606 LINK jsonrpc_server_ut 00:04:36.865 LINK json_write_ut 00:04:36.865 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:36.865 LINK init_grp_ut 00:04:36.865 LINK log_ut 00:04:36.865 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:36.865 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:36.865 LINK blobfs_bdev_ut 00:04:36.865 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:37.124 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:37.124 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:37.124 LINK blobfs_sync_ut 00:04:37.124 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:37.383 LINK scsi_nvme_ut 00:04:37.383 LINK notify_ut 00:04:37.383 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:37.641 LINK gpt_ut 00:04:37.641 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:37.641 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:37.900 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:38.158 LINK json_parse_ut 00:04:38.158 LINK bdev_raid_sb_ut 00:04:38.158 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:38.417 LINK vbdev_lvol_ut 00:04:38.417 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:38.417 LINK lvol_ut 00:04:38.675 LINK concat_ut 00:04:38.675 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:38.675 LINK bdev_zone_ut 00:04:38.675 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:38.934 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:38.934 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 
00:04:39.193 LINK iscsi_ut 00:04:39.193 LINK param_ut 00:04:39.193 LINK bdev_ut 00:04:39.453 LINK bdev_raid_ut 00:04:39.453 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:39.453 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:39.453 LINK vbdev_zone_block_ut 00:04:39.712 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:39.712 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:39.712 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:39.712 LINK nvme_ut 00:04:39.970 LINK dev_ut 00:04:39.970 LINK part_ut 00:04:39.970 LINK portal_grp_ut 00:04:40.227 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:40.227 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:40.227 LINK raid1_ut 00:04:40.227 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:40.511 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:40.511 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:40.511 LINK tgt_node_ut 00:04:40.783 LINK lun_ut 00:04:40.783 LINK raid5f_ut 00:04:40.783 LINK bdev_ut 00:04:41.041 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:41.041 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:41.300 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:41.300 LINK scsi_ut 00:04:41.300 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:41.300 LINK nvme_ctrlr_cmd_ut 00:04:41.559 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:41.559 LINK base64_ut 00:04:41.559 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:41.815 LINK blob_ut 00:04:41.815 LINK iobuf_ut 00:04:41.815 LINK sock_ut 00:04:42.073 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:42.073 LINK bit_array_ut 00:04:42.329 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:42.329 LINK pci_event_ut 00:04:42.329 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:42.329 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:42.329 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:42.329 LINK scsi_bdev_ut 00:04:42.329 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:42.329 LINK crc16_ut 00:04:42.586 LINK cpuset_ut 00:04:42.586 LINK crc32_ieee_ut 00:04:42.586 LINK subsystem_ut 00:04:42.586 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:42.586 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:42.845 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:42.845 LINK nvme_ctrlr_ut 00:04:42.845 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:42.845 LINK tcp_ut 00:04:43.104 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:43.104 LINK thread_ut 00:04:43.104 LINK crc32c_ut 00:04:43.104 LINK crc64_ut 00:04:43.104 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:43.104 LINK scsi_pr_ut 00:04:43.104 LINK posix_ut 00:04:43.363 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:43.363 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:43.363 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:43.363 LINK nvme_ns_ut 00:04:43.363 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:43.363 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:43.363 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:43.363 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:43.623 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:43.623 LINK rpc_ut 00:04:43.623 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:43.882 LINK idxd_user_ut 00:04:43.882 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:44.141 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:44.141 LINK common_ut 00:04:44.141 LINK dif_ut 00:04:44.141 LINK idxd_ut 00:04:44.400 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:44.400 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:44.400 CC 
test/unit/lib/util/math.c/math_ut.o 00:04:44.400 LINK bdev_nvme_ut 00:04:44.659 LINK iov_ut 00:04:44.659 LINK math_ut 00:04:44.918 LINK nvme_ns_cmd_ut 00:04:44.918 LINK ctrlr_bdev_ut 00:04:44.918 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:44.918 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:44.918 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:44.918 CC test/unit/lib/util/string.c/string_ut.o 00:04:45.177 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:45.177 LINK vhost_ut 00:04:45.177 LINK nvmf_ut 00:04:45.177 LINK subsystem_ut 00:04:45.435 LINK string_ut 00:04:45.435 LINK pipe_ut 00:04:45.435 LINK xor_ut 00:04:45.435 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:45.435 LINK ctrlr_discovery_ut 00:04:45.435 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:45.694 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:45.694 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:45.694 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:45.694 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:45.694 LINK ctrlr_ut 00:04:45.694 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:45.953 LINK ftl_l2p_ut 00:04:46.212 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:46.212 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:46.212 LINK nvme_ns_ocssd_cmd_ut 00:04:46.212 LINK nvme_poll_group_ut 00:04:46.212 LINK ftl_bitmap_ut 00:04:46.212 LINK ftl_io_ut 00:04:46.470 LINK ftl_mempool_ut 00:04:46.470 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:46.470 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:46.470 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:46.470 LINK nvme_qpair_ut 00:04:46.470 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:46.729 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:46.729 LINK ftl_band_ut 00:04:46.729 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:46.988 LINK nvme_quirks_ut 00:04:46.989 LINK nvme_pcie_ut 00:04:46.989 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:46.989 LINK ftl_mngt_ut 00:04:46.989 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:47.247 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:47.247 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:47.505 LINK nvme_transport_ut 00:04:47.505 LINK nvme_io_msg_ut 00:04:47.505 LINK ftl_layout_upgrade_ut 00:04:47.764 LINK ftl_sb_ut 00:04:47.764 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:47.764 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:47.764 LINK nvme_opal_ut 00:04:48.022 LINK nvme_fabric_ut 00:04:48.022 LINK transport_ut 00:04:48.022 LINK rdma_ut 00:04:48.281 LINK nvme_pcie_common_ut 00:04:48.847 LINK nvme_tcp_ut 00:04:48.847 LINK nvme_cuse_ut 00:04:49.442 LINK nvme_rdma_ut 00:04:49.723 00:04:49.723 real 1m39.295s 00:04:49.723 user 7m41.290s 00:04:49.723 sys 1m54.110s 00:04:49.723 16:22:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:49.723 ************************************ 00:04:49.723 END TEST unittest_build 00:04:49.723 16:22:20 -- common/autotest_common.sh@10 -- $ set +x 00:04:49.723 ************************************ 00:04:49.723 16:22:21 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.723 16:22:21 -- nvmf/common.sh@7 -- # uname -s 00:04:49.723 16:22:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.723 16:22:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.723 16:22:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.723 16:22:21 -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.723 16:22:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.723 16:22:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.723 16:22:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.723 16:22:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.723 16:22:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.723 16:22:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.723 16:22:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13499af3-90dd-48a8-9f5d-49ca841e247a 00:04:49.723 16:22:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=13499af3-90dd-48a8-9f5d-49ca841e247a 00:04:49.723 16:22:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.723 16:22:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.723 16:22:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.723 16:22:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.723 16:22:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.723 16:22:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.723 16:22:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.724 16:22:21 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:49.724 16:22:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:49.724 16:22:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:49.724 16:22:21 -- paths/export.sh@5 -- # export PATH 00:04:49.724 16:22:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:49.724 16:22:21 -- nvmf/common.sh@46 -- # : 0 00:04:49.724 16:22:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:49.724 16:22:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:49.724 16:22:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:49.724 16:22:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.724 16:22:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.724 16:22:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:49.724 16:22:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:49.724 16:22:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:49.724 16:22:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:49.724 16:22:21 -- spdk/autotest.sh@32 -- # uname -s 00:04:49.724 16:22:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:49.724 16:22:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 
00:04:49.724 16:22:21 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:49.724 16:22:21 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:49.724 16:22:21 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:49.724 16:22:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:49.724 16:22:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:49.724 16:22:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:49.724 16:22:21 -- spdk/autotest.sh@48 -- # udevadm_pid=104030 00:04:49.724 16:22:21 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:49.724 16:22:21 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:49.724 16:22:21 -- spdk/autotest.sh@54 -- # echo 104055 00:04:49.724 16:22:21 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:49.724 16:22:21 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:49.724 16:22:21 -- spdk/autotest.sh@56 -- # echo 104065 00:04:49.724 16:22:21 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:49.724 16:22:21 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:49.724 16:22:21 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:49.724 16:22:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:49.724 16:22:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.724 16:22:21 -- spdk/autotest.sh@70 -- # create_test_list 00:04:49.724 16:22:21 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:49.724 16:22:21 -- common/autotest_common.sh@10 -- # set +x 00:04:49.724 16:22:21 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:49.724 16:22:21 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:49.724 16:22:21 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:49.724 16:22:21 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:49.724 16:22:21 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:49.724 16:22:21 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:49.724 16:22:21 -- common/autotest_common.sh@1440 -- # uname 00:04:49.724 16:22:21 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:49.724 16:22:21 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:49.724 16:22:21 -- common/autotest_common.sh@1460 -- # uname 00:04:49.724 16:22:21 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:49.724 16:22:21 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:49.724 16:22:21 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:49.724 16:22:21 -- spdk/autotest.sh@83 -- # hash lcov 00:04:49.724 16:22:21 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:49.724 16:22:21 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:49.724 --rc lcov_branch_coverage=1 00:04:49.724 --rc lcov_function_coverage=1 00:04:49.724 --rc genhtml_branch_coverage=1 00:04:49.724 --rc genhtml_function_coverage=1 00:04:49.724 --rc genhtml_legend=1 00:04:49.724 --rc geninfo_all_blocks=1 00:04:49.724 ' 00:04:49.724 16:22:21 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:49.724 --rc lcov_branch_coverage=1 00:04:49.724 --rc lcov_function_coverage=1 00:04:49.724 --rc genhtml_branch_coverage=1 
00:04:49.724 --rc genhtml_function_coverage=1 00:04:49.724 --rc genhtml_legend=1 00:04:49.724 --rc geninfo_all_blocks=1 00:04:49.724 ' 00:04:49.724 16:22:21 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:49.724 --rc lcov_branch_coverage=1 00:04:49.724 --rc lcov_function_coverage=1 00:04:49.724 --rc genhtml_branch_coverage=1 00:04:49.724 --rc genhtml_function_coverage=1 00:04:49.724 --rc genhtml_legend=1 00:04:49.724 --rc geninfo_all_blocks=1 00:04:49.724 --no-external' 00:04:49.724 16:22:21 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:49.724 --rc lcov_branch_coverage=1 00:04:49.724 --rc lcov_function_coverage=1 00:04:49.724 --rc genhtml_branch_coverage=1 00:04:49.724 --rc genhtml_function_coverage=1 00:04:49.724 --rc genhtml_legend=1 00:04:49.724 --rc geninfo_all_blocks=1 00:04:49.724 --no-external' 00:04:49.724 16:22:21 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:49.982 lcov: LCOV version 1.15 00:04:49.982 16:22:21 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:08.061 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:08.061 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:08.061 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:08.061 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:08.061 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:08.061 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:34.605 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:34.605 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:34.605 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:34.605 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:34.605 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no 
functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:34.606 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:34.606 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:34.607 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:34.607 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:34.607 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:34.607 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:34.607 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:34.607 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:34.607 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:34.607 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:34.866 16:23:06 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:34.866 16:23:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:34.866 16:23:06 -- common/autotest_common.sh@10 -- # set +x 00:05:34.866 16:23:06 -- spdk/autotest.sh@102 -- # rm -f 00:05:34.866 16:23:06 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:35.432 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:35.433 16:23:06 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:35.433 16:23:06 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:35.433 16:23:06 -- 
common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:35.433 16:23:06 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:35.433 16:23:06 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:35.433 16:23:06 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:35.433 16:23:06 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:35.433 16:23:06 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:35.433 16:23:06 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:35.433 16:23:06 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:35.433 16:23:06 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:05:35.433 16:23:06 -- spdk/autotest.sh@121 -- # grep -v p 00:05:35.433 16:23:06 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:35.433 16:23:06 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:35.433 16:23:06 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:35.433 16:23:06 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:35.433 16:23:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:35.433 No valid GPT data, bailing 00:05:35.433 16:23:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:35.433 16:23:06 -- scripts/common.sh@393 -- # pt= 00:05:35.433 16:23:06 -- scripts/common.sh@394 -- # return 1 00:05:35.433 16:23:06 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:35.433 1+0 records in 00:05:35.433 1+0 records out 00:05:35.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630115 s, 166 MB/s 00:05:35.433 16:23:06 -- spdk/autotest.sh@129 -- # sync 00:05:35.433 16:23:06 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:35.433 16:23:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:35.433 16:23:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:37.337 16:23:08 -- spdk/autotest.sh@135 -- # uname -s 00:05:37.337 16:23:08 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:37.337 16:23:08 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:37.337 16:23:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.337 16:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.337 16:23:08 -- common/autotest_common.sh@10 -- # set +x 00:05:37.337 ************************************ 00:05:37.337 START TEST setup.sh 00:05:37.337 ************************************ 00:05:37.337 16:23:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:37.337 * Looking for test storage... 
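The pre-cleanup pass above probes /dev/nvme0n1 for a partition table (spdk-gpt.py reports "No valid GPT data, bailing" and blkid returns an empty PTTYPE), concludes the namespace is not in use, then zero-fills its first MiB and syncs. A minimal standalone sketch of that probe-then-wipe pattern, assuming the same device path and 1 MiB wipe size as the trace (hypothetical script; the real logic is spread across scripts/common.sh and spdk/autotest.sh):

  #!/usr/bin/env bash
  # Zero-fill a block device only when no partition table is detected,
  # mirroring the blkid probe and 1 MiB dd wipe in the trace above.
  set -euo pipefail

  dev=${1:-/dev/nvme0n1}   # device under test (path assumed from the trace)

  # blkid prints the partition-table type (gpt/dos); empty output and a
  # nonzero exit mean no table was found, hence the `|| true` under set -e.
  pt=$(blkid -s PTTYPE -o value "$dev" || true)

  if [[ -z $pt ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1   # blank the first MiB
      sync                                      # flush the zeros to the device
  else
      echo "skip $dev: partition table '$pt' present" >&2
  fi

In the run above the probe came back empty, so dd reports 1048576 bytes copied at 166 MB/s before the sync.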
00:05:37.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:37.337 16:23:08 -- setup/test-setup.sh@10 -- # uname -s 00:05:37.337 16:23:08 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:37.337 16:23:08 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:37.337 16:23:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.337 16:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.337 16:23:08 -- common/autotest_common.sh@10 -- # set +x 00:05:37.337 ************************************ 00:05:37.337 START TEST acl 00:05:37.337 ************************************ 00:05:37.337 16:23:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:37.337 * Looking for test storage... 00:05:37.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:37.337 16:23:08 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:37.337 16:23:08 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:37.337 16:23:08 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:37.337 16:23:08 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:37.337 16:23:08 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:37.337 16:23:08 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:37.337 16:23:08 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:37.337 16:23:08 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:37.337 16:23:08 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:37.337 16:23:08 -- setup/acl.sh@12 -- # devs=() 00:05:37.337 16:23:08 -- setup/acl.sh@12 -- # declare -a devs 00:05:37.337 16:23:08 -- setup/acl.sh@13 -- # drivers=() 00:05:37.337 16:23:08 -- setup/acl.sh@13 -- # declare -A drivers 00:05:37.337 16:23:08 -- setup/acl.sh@51 -- # setup reset 00:05:37.337 16:23:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.337 16:23:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.905 16:23:09 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:37.905 16:23:09 -- setup/acl.sh@16 -- # local dev driver 00:05:37.905 16:23:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:37.905 16:23:09 -- setup/acl.sh@15 -- # setup output status 00:05:37.905 16:23:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.905 16:23:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:38.162 Hugepages 00:05:38.162 node hugesize free / total 00:05:38.162 16:23:09 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:38.162 16:23:09 -- setup/acl.sh@19 -- # continue 00:05:38.162 16:23:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:38.162 00:05:38.162 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:38.162 16:23:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:38.162 16:23:09 -- setup/acl.sh@19 -- # continue 00:05:38.163 16:23:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:38.163 16:23:09 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:38.163 16:23:09 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:38.163 16:23:09 -- setup/acl.sh@20 -- # continue 00:05:38.163 16:23:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:38.420 16:23:09 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:38.420 16:23:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:38.420 16:23:09 -- setup/acl.sh@21 -- # 
[[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:38.420 16:23:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:38.420 16:23:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:38.420 16:23:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:38.420 16:23:09 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:38.420 16:23:09 -- setup/acl.sh@54 -- # run_test denied denied 00:05:38.420 16:23:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.420 16:23:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.420 16:23:09 -- common/autotest_common.sh@10 -- # set +x 00:05:38.420 ************************************ 00:05:38.420 START TEST denied 00:05:38.420 ************************************ 00:05:38.420 16:23:09 -- common/autotest_common.sh@1104 -- # denied 00:05:38.420 16:23:09 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:38.420 16:23:09 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:38.420 16:23:09 -- setup/acl.sh@38 -- # setup output config 00:05:38.420 16:23:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.420 16:23:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:40.951 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:40.951 16:23:11 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:40.951 16:23:11 -- setup/acl.sh@28 -- # local dev driver 00:05:40.951 16:23:11 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.951 16:23:11 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:40.951 16:23:11 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:40.951 16:23:11 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.951 16:23:11 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.951 16:23:11 -- setup/acl.sh@41 -- # setup reset 00:05:40.951 16:23:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.951 16:23:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.211 00:05:41.211 real 0m2.792s 00:05:41.211 user 0m0.528s 00:05:41.211 sys 0m2.331s 00:05:41.211 16:23:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.211 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.211 ************************************ 00:05:41.211 END TEST denied 00:05:41.211 ************************************ 00:05:41.211 16:23:12 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:41.211 16:23:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.211 16:23:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.211 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.211 ************************************ 00:05:41.211 START TEST allowed 00:05:41.211 ************************************ 00:05:41.211 16:23:12 -- common/autotest_common.sh@1104 -- # allowed 00:05:41.211 16:23:12 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:41.211 16:23:12 -- setup/acl.sh@45 -- # setup output config 00:05:41.211 16:23:12 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:41.211 16:23:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.211 16:23:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.114 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.114 16:23:14 -- setup/acl.sh@47 -- # verify 00:05:43.114 16:23:14 -- setup/acl.sh@28 -- # local dev driver 00:05:43.114 16:23:14 -- setup/acl.sh@48 -- # setup reset 00:05:43.114 16:23:14 -- 
setup/common.sh@9 -- # [[ reset == output ]] 00:05:43.114 16:23:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.372 00:05:43.372 real 0m2.141s 00:05:43.372 user 0m0.488s 00:05:43.372 sys 0m1.668s 00:05:43.372 ************************************ 00:05:43.372 END TEST allowed 00:05:43.372 ************************************ 00:05:43.372 16:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.372 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.372 ************************************ 00:05:43.372 END TEST acl 00:05:43.372 ************************************ 00:05:43.372 00:05:43.372 real 0m6.164s 00:05:43.372 user 0m1.573s 00:05:43.372 sys 0m4.735s 00:05:43.372 16:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.372 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.631 16:23:14 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:43.631 16:23:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.631 16:23:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.631 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.631 ************************************ 00:05:43.631 START TEST hugepages 00:05:43.631 ************************************ 00:05:43.631 16:23:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:43.631 * Looking for test storage... 00:05:43.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:43.631 16:23:14 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:43.631 16:23:14 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:43.631 16:23:14 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:43.631 16:23:14 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:43.631 16:23:14 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:43.631 16:23:14 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:43.631 16:23:14 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:43.631 16:23:14 -- setup/common.sh@18 -- # local node= 00:05:43.631 16:23:14 -- setup/common.sh@19 -- # local var val 00:05:43.631 16:23:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.631 16:23:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.631 16:23:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.631 16:23:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.631 16:23:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.631 16:23:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 2124628 kB' 'MemAvailable: 7397008 kB' 'Buffers: 40452 kB' 'Cached: 5330420 kB' 'SwapCached: 0 kB' 'Active: 1379764 kB' 'Inactive: 4108896 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 128392 kB' 'Active(file): 1378704 kB' 'Inactive(file): 3980504 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 560 kB' 'Writeback: 0 kB' 'AnonPages: 147044 kB' 'Mapped: 68412 kB' 'Shmem: 2600 kB' 'KReclaimable: 234476 kB' 'Slab: 302988 kB' 'SReclaimable: 234476 kB' 'SUnreclaim: 68512 kB' 'KernelStack: 4652 kB' 'PageTables: 4136 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 513816 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Inactive(file) 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.631 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.631 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # continue 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.632 16:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.632 16:23:14 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:43.632 16:23:14 -- setup/common.sh@33 -- # echo 2048 00:05:43.632 16:23:14 -- setup/common.sh@33 -- # return 0 00:05:43.632 16:23:14 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:43.632 16:23:14 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:43.632 16:23:14 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:43.632 16:23:14 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:43.632 16:23:14 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:43.633 16:23:14 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:43.633 16:23:14 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:43.633 16:23:14 -- setup/hugepages.sh@207 -- # get_nodes 00:05:43.633 16:23:14 -- setup/hugepages.sh@27 -- # local node 00:05:43.633 16:23:14 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:43.633 16:23:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:43.633 16:23:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:43.633 16:23:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:43.633 16:23:14 -- setup/hugepages.sh@208 -- # clear_hp 00:05:43.633 16:23:14 -- setup/hugepages.sh@37 -- # local node hp 00:05:43.633 16:23:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:43.633 16:23:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:43.633 16:23:14 -- setup/hugepages.sh@41 -- # echo 0 00:05:43.633 16:23:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:43.633 16:23:14 -- setup/hugepages.sh@41 -- # echo 0 00:05:43.633 16:23:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:43.633 16:23:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:43.633 16:23:15 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:43.633 16:23:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.633 16:23:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.633 16:23:15 -- common/autotest_common.sh@10 -- # set +x 00:05:43.633 ************************************ 00:05:43.633 START TEST default_setup 00:05:43.633 ************************************ 00:05:43.633 16:23:15 -- common/autotest_common.sh@1104 -- # default_setup 00:05:43.633 16:23:15 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:43.633 16:23:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:43.633 16:23:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:43.633 16:23:15 -- setup/hugepages.sh@51 -- # shift 00:05:43.633 16:23:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:43.633 16:23:15 -- setup/hugepages.sh@52 -- # local node_ids 00:05:43.633 16:23:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:43.633 16:23:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:43.633 16:23:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:43.633 16:23:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:43.633 16:23:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:43.633 16:23:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:43.633 16:23:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:43.633 16:23:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:43.633 16:23:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:43.633 16:23:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:43.633 16:23:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:43.633 16:23:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:43.633 16:23:15 -- setup/hugepages.sh@73 -- # return 0 00:05:43.633 16:23:15 -- setup/hugepages.sh@137 -- # setup output 00:05:43.633 16:23:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.633 16:23:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:44.202 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.142 16:23:16 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:45.142 16:23:16 -- setup/hugepages.sh@89 -- # local node 00:05:45.142 16:23:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:45.142 16:23:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:45.142 16:23:16 -- 
setup/hugepages.sh@92 -- # local surp 00:05:45.142 16:23:16 -- setup/hugepages.sh@93 -- # local resv 00:05:45.142 16:23:16 -- setup/hugepages.sh@94 -- # local anon 00:05:45.142 16:23:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:45.142 16:23:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:45.142 16:23:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:45.142 16:23:16 -- setup/common.sh@18 -- # local node= 00:05:45.142 16:23:16 -- setup/common.sh@19 -- # local var val 00:05:45.142 16:23:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.142 16:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.142 16:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.142 16:23:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.142 16:23:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.142 16:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.142 16:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.142 16:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.143 16:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4206860 kB' 'MemAvailable: 9479188 kB' 'Buffers: 40452 kB' 'Cached: 5330428 kB' 'SwapCached: 0 kB' 'Active: 1379896 kB' 'Inactive: 4124344 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 143944 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980400 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 162472 kB' 'Mapped: 68316 kB' 'Shmem: 2596 kB' 'KReclaimable: 234416 kB' 'Slab: 302396 kB' 'SReclaimable: 234416 kB' 'SUnreclaim: 67980 kB' 'KernelStack: 4432 kB' 'PageTables: 3632 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 528832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:45.143 16:23:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.143 16:23:16 -- setup/common.sh@32 -- # continue 00:05:45.143 16:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.143 16:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.143 16:23:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.143 16:23:16 -- setup/common.sh@32 -- # continue 00:05:45.143 16:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.143 16:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.143 16:23:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.143 16:23:16 -- setup/common.sh@32 -- # continue 00:05:45.143 16:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.143 16:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.143 16:23:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.143 16:23:16 -- setup/common.sh@32 -- # continue 00:05:45.143 16:23:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.143 16:23:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.143 16:23:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.143 16:23:16 -- 
setup/common.sh@32 -- # continue
00:05:45.143 16:23:16 -- setup/common.sh@31 -- # IFS=': '
00:05:45.143 16:23:16 -- setup/common.sh@31 -- # read -r var val _
00:05:45.143 16:23:16 -- setup/common.sh@32 -- # per-key scan: SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted each fail [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and hit continue
00:05:45.143 16:23:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:45.143 16:23:16 -- setup/common.sh@33 -- # echo 0
00:05:45.143 16:23:16 -- setup/common.sh@33 -- # return 0
00:05:45.143 16:23:16 -- setup/hugepages.sh@97 -- # anon=0
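The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key with an IFS=': ' read loop and echoing the value of the first key that matches. A minimal standalone sketch of that scan pattern (simplified: the traced helper buffers the file with mapfile and strips any leading "Node N" prefix first, and its behaviour for a missing key is not shown in this log, so the return 1 fallback is an assumption):

    #!/usr/bin/env bash
    # Sketch of the scan the trace shows: split "Key: value kB" on ':' and
    # whitespace, print the numeric value once the requested key matches.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # the "kB" unit, if present, lands in $_
                return 0
            fi
        done < /proc/meminfo
        return 1   # assumed fallback for a missing key
    }

    get_meminfo_sketch AnonHugePages   # prints 0 on the box traced above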
00:05:45.143 16:23:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:45.143 16:23:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:45.143 16:23:16 -- setup/common.sh@18 -- # local node=
00:05:45.143 16:23:16 -- setup/common.sh@19 -- # local var val
00:05:45.143 16:23:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:45.143 16:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.143 16:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.143 16:23:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.143 16:23:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.143 16:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.143 16:23:16 -- setup/common.sh@31 -- # IFS=': '
00:05:45.144 16:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4206860 kB' 'MemAvailable: 9479188 kB' 'Buffers: 40452 kB' 'Cached: 5330428 kB' 'SwapCached: 0 kB' 'Active: 1379904 kB' 'Inactive: 4124352 kB' 'Active(anon): 1088 kB' 'Inactive(anon): 143952 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980400 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 162248 kB' 'Mapped: 68316 kB' 'Shmem: 2596 kB' 'KReclaimable: 234416 kB' 'Slab: 302396 kB' 'SReclaimable: 234416 kB' 'SUnreclaim: 67980 kB' 'KernelStack: 4432 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 528832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:45.144 16:23:16 -- setup/common.sh@31 -- # read -r var val _
00:05:45.144 16:23:16 -- setup/common.sh@32 -- # per-key scan: every key from MemTotal through HugePages_Rsvd fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits continue
00:05:45.145 16:23:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.145 16:23:16 -- setup/common.sh@33 -- # echo 0
00:05:45.145 16:23:16 -- setup/common.sh@33 -- # return 0
00:05:45.145 16:23:16 -- setup/hugepages.sh@99 -- # surp=0
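Worth noting in the preamble above: get_meminfo was called without a node argument, so the empty $node makes the @23 test probe the non-existent path /sys/devices/system/node/node/meminfo and mem_f stays on /proc/meminfo; later in this run the same helper is called with node 0 and switches to the per-node file. A short sketch of that source selection, following the trace:

    # Node-aware meminfo source selection, as the setup/common.sh trace shows.
    pick_meminfo_file() {
        local node=$1
        local mem_f=/proc/meminfo
        # With $node empty this probes .../node/node/meminfo, which never exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    pick_meminfo_file ''   # -> /proc/meminfo (system-wide counters)
    pick_meminfo_file 0    # -> /sys/devices/system/node/node0/meminfo (per-node counters)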
00:05:45.145 16:23:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:45.145 16:23:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:45.145 16:23:16 -- setup/common.sh@18 -- # local node=
00:05:45.145 16:23:16 -- setup/common.sh@19 -- # local var val
00:05:45.145 16:23:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:45.145 16:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.145 16:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.145 16:23:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.145 16:23:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.145 16:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.145 16:23:16 -- setup/common.sh@31 -- # IFS=': '
00:05:45.145 16:23:16 -- setup/common.sh@31 -- # read -r var val _
00:05:45.145 16:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4206608 kB' 'MemAvailable: 9478932 kB' 'Buffers: 40452 kB' 'Cached: 5330428 kB' 'SwapCached: 0 kB' 'Active: 1379888 kB' 'Inactive: 4124356 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143952 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 162676 kB' 'Mapped: 68268 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302388 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 67980 kB' 'KernelStack: 4464 kB' 'PageTables: 3736 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 528832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:45.145 16:23:16 -- setup/common.sh@32 -- # per-key scan: every key from MemTotal through HugePages_Free fails [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and hits continue
00:05:45.146 16:23:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:45.146 16:23:16 -- setup/common.sh@33 -- # echo 0
00:05:45.146 16:23:16 -- setup/common.sh@33 -- # return 0
00:05:45.146 16:23:16 -- setup/hugepages.sh@100 -- # resv=0
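With anon=0, surp=0 and resv=0 collected, verify_nr_hugepages can check the kernel's HugePages_Total against the requested page count plus surplus and reserved pages, which is what the hugepages.sh@107-@110 arithmetic below does. A compact sketch of that bookkeeping (variable names follow the trace; the awk extraction is a stand-in for the traced get_meminfo calls):

    # Hugepage accounting check mirrored from the verify_nr_hugepages trace.
    nr_hugepages=1024 surp=0 resv=0 anon=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        printf '%s\n' "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
            "surplus_hugepages=$surp" "anon_hugepages=$anon"
    else
        echo "hugepage accounting mismatch: HugePages_Total=$total" >&2
        exit 1
    fi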
00:05:45.146 nr_hugepages=1024
00:05:45.146 16:23:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:45.146 resv_hugepages=0
00:05:45.146 16:23:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:45.146 surplus_hugepages=0
00:05:45.146 16:23:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:45.146 anon_hugepages=0
00:05:45.146 16:23:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:45.146 16:23:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:45.146 16:23:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:45.406 16:23:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:45.406 16:23:16 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:45.406 16:23:16 -- setup/common.sh@18 -- # local node=
00:05:45.406 16:23:16 -- setup/common.sh@19 -- # local var val
00:05:45.406 16:23:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:45.406 16:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.406 16:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.406 16:23:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.406 16:23:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.406 16:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.406 16:23:16 -- setup/common.sh@31 -- # IFS=': '
00:05:45.406 16:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4207384 kB' 'MemAvailable: 9479708 kB' 'Buffers: 40452 kB' 'Cached: 5330428 kB' 'SwapCached: 0 kB' 'Active: 1379880 kB' 'Inactive: 4124272 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 143868 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 162292 kB' 'Mapped: 68160 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302388 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 67980 kB' 'KernelStack: 4448 kB' 'PageTables: 3704 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 528832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:45.406 16:23:16 -- setup/common.sh@31 -- # read -r var val _
00:05:45.406 16:23:16 -- setup/common.sh@32 -- # per-key scan: every key from MemTotal through FilePmdMapped fails [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and hits continue
00:05:45.407 16:23:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:45.407 16:23:16 -- setup/common.sh@33 -- # echo 1024
00:05:45.407 16:23:16 -- setup/common.sh@33 -- # return 0
00:05:45.407 16:23:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:45.407 16:23:16 -- setup/hugepages.sh@112 -- # get_nodes
00:05:45.407 16:23:16 -- setup/hugepages.sh@27 -- # local node
00:05:45.407 16:23:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:45.407 16:23:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:45.407 16:23:16 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:45.407 16:23:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:45.407 16:23:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:45.407 16:23:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:45.407 16:23:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:45.407 16:23:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:45.407 16:23:16 -- setup/common.sh@18 -- # local node=0
00:05:45.407 16:23:16 -- setup/common.sh@19 -- # local var val
00:05:45.407 16:23:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:45.407 16:23:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.407 16:23:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:45.407 16:23:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:45.407 16:23:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.407 16:23:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.407 16:23:16 -- setup/common.sh@31 -- # IFS=': '
00:05:45.407 16:23:16 -- setup/common.sh@31 -- # read -r var val _
00:05:45.408 16:23:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4207636 kB' 'MemUsed: 8035340 kB' 'SwapCached: 0 kB' 'Active: 1379880 kB' 'Inactive: 4123780 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 143376 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'FilePages: 5370880 kB' 'Mapped: 68160 kB' 'AnonPages: 162252 kB' 'Shmem: 2596 kB' 'KernelStack: 4468 kB' 'PageTables: 3588 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234408 kB' 'Slab: 302388 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 67980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:45.408 16:23:16 -- setup/common.sh@32 -- # per-key scan: every key from MemTotal through HugePages_Free fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits continue
00:05:45.408 16:23:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.408 16:23:16 -- setup/common.sh@33 -- # echo 0
00:05:45.408 16:23:16 -- setup/common.sh@33 -- # return 0
00:05:45.408 16:23:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:45.408 16:23:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:45.408 16:23:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:45.408 16:23:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:45.408 node0=1024 expecting 1024
00:05:45.408 16:23:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:45.408 16:23:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:45.408
00:05:45.408 real	0m1.635s
00:05:45.408 user	0m0.359s
00:05:45.408 sys	0m1.282s
00:05:45.408 16:23:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:45.408 16:23:16 -- common/autotest_common.sh@10 -- # set +x
00:05:45.408 ************************************
00:05:45.408 END TEST default_setup
00:05:45.408 ************************************
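default_setup closes with the per-node comparison that produced 'node0=1024 expecting 1024'. A sketch of that final loop (@126-@130 in the trace), with the two per-node counts hard-coded for illustration; which array holds the expected versus the observed value is inferred from the echo format:

    # Per-node expectation check, following the @126 loop in the trace:
    # nodes_test carries the per-node target, nodes_sys what sysfs reports.
    declare -A nodes_test=( [0]=1024 ) nodes_sys=( [0]=1024 )
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
        [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || exit 1
    done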
00:05:45.408 16:23:16 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:45.408 16:23:16 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:45.408 16:23:16 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:45.408 16:23:16 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:45.408 16:23:16 -- setup/hugepages.sh@51 -- # shift
00:05:45.408 16:23:16 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:45.408 16:23:16 -- setup/hugepages.sh@52 -- # local node_ids
00:05:45.408 16:23:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:45.408 16:23:16 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:45.408 16:23:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:45.408 16:23:16 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:45.408 16:23:16 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:45.408 16:23:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:45.408 16:23:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:45.408 16:23:16 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:45.408 16:23:16 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:45.408 16:23:16 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:45.408 16:23:16 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:45.408 16:23:16 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:45.408 16:23:16 -- setup/hugepages.sh@73 -- # return 0
00:05:45.408 16:23:16 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:45.408 16:23:16 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:45.408 16:23:16 -- setup/hugepages.sh@146 -- # setup output
00:05:45.408 16:23:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:45.408 16:23:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:45.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:45.927 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
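The sizing step above turns a size request into a page count: get_test_nr_hugepages receives 1048576 (kB, i.e. 1 GiB) plus the target node, and with the 2048 kB Hugepagesize reported in the meminfo dumps below that works out to the nr_hugepages=512 seen in the trace, all of it assigned to node 0 via HUGENODE=0 before scripts/setup.sh reserves the pool. A sketch of that arithmetic, with illustrative variable names:

    size_kb=1048576                                    # 1 GiB, in kB
    hugepage_kb=2048                                   # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepage_kb ))          # 1048576 / 2048 = 512
    NRHUGE=$nr_hugepages HUGENODE=0 ./scripts/setup.sh # reserve 512 pages on node 0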
00:05:46.188 16:23:17 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:46.188 16:23:17 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:46.188 16:23:17 -- setup/hugepages.sh@89 -- # local node
00:05:46.188 16:23:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:46.188 16:23:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:46.188 16:23:17 -- setup/hugepages.sh@92 -- # local surp
00:05:46.188 16:23:17 -- setup/hugepages.sh@93 -- # local resv
00:05:46.188 16:23:17 -- setup/hugepages.sh@94 -- # local anon
00:05:46.188 16:23:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:46.188 16:23:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:46.188 16:23:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:46.188 16:23:17 -- setup/common.sh@18 -- # local node=
00:05:46.188 16:23:17 -- setup/common.sh@19 -- # local var val
00:05:46.188 16:23:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.188 16:23:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.188 16:23:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.188 16:23:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.188 16:23:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.188 16:23:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.188 16:23:17 -- setup/common.sh@31 -- # IFS=': '
00:05:46.188 16:23:17 -- setup/common.sh@31 -- # read -r var val _
00:05:46.188 16:23:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5259420 kB' 'MemAvailable: 10531744 kB' 'Buffers: 40452 kB' 'Cached: 5330428 kB' 'SwapCached: 0 kB' 'Active: 1379888 kB' 'Inactive: 4124092 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143688 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 162328 kB' 'Mapped: 68168 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302468 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68060 kB' 'KernelStack: 4416 kB' 'PageTables: 3640 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 528832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:46.188 16:23:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:46.188 16:23:17 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 IFS/read/compare/continue xtrace repeats for each field in the dump above, none matching until AnonHugePages ...]
00:05:46.189 16:23:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:46.189 16:23:17 -- setup/common.sh@33 -- # echo 0
00:05:46.189 16:23:17 -- setup/common.sh@33 -- # return 0
00:05:46.189 16:23:17 -- setup/hugepages.sh@97 -- # anon=0
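That first get_meminfo call shows the lookup technique this log keeps replaying: mapfile the chosen meminfo file into an array, strip any "Node N " prefix, then split each line on IFS=': ' and echo the value of the first field whose name matches. A condensed, runnable sketch that follows the xtrace (not a verbatim copy of setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob                     # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ mem_f mem line
        mem_f=/proc/meminfo
        # with a node argument, read the per-node view instead (as @23/@24 do)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")         # drop the sysfs "Node 0 " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue     # the compare/continue loop traced above
            echo "$val"                          # bare number; kB for most fields
            return 0
        done
        return 1
    }

For example, get_meminfo_sketch AnonHugePages prints 0 on this host, matching the echo 0 in the trace.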
00:05:46.189 16:23:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:46.189 16:23:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:46.189 16:23:17 -- setup/common.sh@18 -- # local node=
00:05:46.189 16:23:17 -- setup/common.sh@19 -- # local var val
00:05:46.189 16:23:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.189 16:23:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.189 16:23:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.189 16:23:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.189 16:23:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.189 16:23:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.189 16:23:17 -- setup/common.sh@31 -- # IFS=': '
00:05:46.189 16:23:17 -- setup/common.sh@31 -- # read -r var val _
00:05:46.189 16:23:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5259420 kB' 'MemAvailable: 10531744 kB' 'Buffers: 40452 kB' 'Cached: 5330428 kB' 'SwapCached: 0 kB' 'Active: 1379888 kB' 'Inactive: 4124260 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143856 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 162496 kB' 'Mapped: 68168 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302468 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68060 kB' 'KernelStack: 4384 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 528832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:46.189 16:23:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.189 16:23:17 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 IFS/read/compare/continue xtrace repeats for each field in the dump above, none matching until HugePages_Surp ...]
00:05:46.190 16:23:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:46.190 16:23:17 -- setup/common.sh@33 -- # echo 0
00:05:46.190 16:23:17 -- setup/common.sh@33 -- # return 0
00:05:46.190 16:23:17 -- setup/hugepages.sh@99 -- # surp=0
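The fields being polled are the kernel's hugepage pool accounting: HugePages_Total is the pool size, HugePages_Free the unused pages, HugePages_Rsvd pages promised to mappings but not yet faulted in, and HugePages_Surp overcommit pages above the static pool; the test expects the reserved and surplus counts to be 0 right after setup. The same counters can be read directly:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo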
00:05:46.190 16:23:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:46.190 16:23:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:46.190 16:23:17 -- setup/common.sh@18 -- # local node=
00:05:46.190 16:23:17 -- setup/common.sh@19 -- # local var val
00:05:46.190 16:23:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.190 16:23:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.190 16:23:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.190 16:23:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.190 16:23:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.190 16:23:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.191 16:23:17 -- setup/common.sh@31 -- # IFS=': '
00:05:46.191 16:23:17 -- setup/common.sh@31 -- # read -r var val _
00:05:46.191 16:23:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5259420 kB' 'MemAvailable: 10531744 kB' 'Buffers: 40452 kB' 'Cached: 5330428 kB' 'SwapCached: 0 kB' 'Active: 1379888 kB' 'Inactive: 4124000 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 143596 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 162236 kB' 'Mapped: 68168 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302468 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68060 kB' 'KernelStack: 4384 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 528832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:46.191 16:23:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:46.191 16:23:17 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 IFS/read/compare/continue xtrace repeats for each field in the dump above, none matching until HugePages_Rsvd ...]
00:05:46.191 16:23:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:46.192 16:23:17 -- setup/common.sh@33 -- # echo 0
00:05:46.192 16:23:17 -- setup/common.sh@33 -- # return 0
00:05:46.192 16:23:17 -- setup/hugepages.sh@100 -- # resv=0
00:05:46.192 nr_hugepages=512
00:05:46.192 16:23:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:46.192 resv_hugepages=0
00:05:46.192 16:23:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:46.192 surplus_hugepages=0
00:05:46.192 16:23:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:46.192 anon_hugepages=0
00:05:46.192 16:23:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:46.192 16:23:17 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:46.192 16:23:17 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
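With anon, surp, and resv collected, verify_nr_hugepages echoes its findings and asserts the accounting identity: the configured total must equal the requested pages plus surplus plus reserved, here 512 == 512 + 0 + 0. The same check stand-alone, built on the hypothetical get_meminfo_sketch helper from earlier (same caveats):

    expected=512                                   # the NRHUGE requested earlier
    surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in the dumps above
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0
    total=$(get_meminfo_sketch HugePages_Total)    # 512
    (( total == expected + surp + resv )) && echo 'hugepage pool verified'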
00:05:46.192 16:23:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:46.192 16:23:17 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:46.192 16:23:17 -- setup/common.sh@18 -- # local node=
00:05:46.192 16:23:17 -- setup/common.sh@19 -- # local var val
00:05:46.192 16:23:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:46.192 16:23:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:46.192 16:23:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:46.192 16:23:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:46.192 16:23:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:46.192 16:23:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:46.192 16:23:17 -- setup/common.sh@31 -- # IFS=': '
00:05:46.192 16:23:17 -- setup/common.sh@31 -- # read -r var val _
00:05:46.192 16:23:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5259436 kB' 'MemAvailable: 10531760 kB' 'Buffers: 40452 kB' 'Cached: 5330428 kB' 'SwapCached: 0 kB' 'Active: 1379880 kB' 'Inactive: 4124084 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 143680 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 162336 kB' 'Mapped: 68680 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302468 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68060 kB' 'KernelStack: 4452 kB' 'PageTables: 3556 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 531284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:46.192 16:23:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:46.192 16:23:17 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 IFS/read/compare/continue xtrace repeats for each field in the dump above, none matching until HugePages_Total ...]
00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:46.193 16:23:17 -- setup/common.sh@33 -- # echo 512
00:05:46.193 16:23:17 -- setup/common.sh@33 -- # return 0
00:05:46.193 16:23:17 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:46.193 16:23:17 -- setup/hugepages.sh@112 -- # get_nodes
00:05:46.193 16:23:17 -- setup/hugepages.sh@27 -- # local node
00:05:46.193 16:23:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:46.193 16:23:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:46.193 16:23:17 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:46.193 16:23:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:46.193 16:23:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
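get_nodes builds the per-node picture: it globs /sys/devices/system/node/node+([0-9]) (an extglob pattern), derives each numeric node id with ${node##*node}, and records a page count per node in nodes_sys before the loop below re-reads HugePages_Surp for each node. A sketch of that enumeration; reading HugePages_Total per node is an assumption about what the 512 stored at @30 represents:

    shopt -s extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                          # "/sys/.../node0" -> "0"
        nodes_sys[$id]=$(get_meminfo_sketch HugePages_Total "$id")
    done
    declare -p nodes_sys                           # here: declare -A nodes_sys=([0]="512")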
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.193 16:23:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:46.193 16:23:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.193 16:23:17 -- setup/common.sh@18 -- # local node=0 00:05:46.193 16:23:17 -- setup/common.sh@19 -- # local var val 00:05:46.193 16:23:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.193 16:23:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.193 16:23:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:46.193 16:23:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.193 16:23:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.193 16:23:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5259436 kB' 'MemUsed: 6983540 kB' 'SwapCached: 0 kB' 'Active: 1379880 kB' 'Inactive: 4124344 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 143940 kB' 'Active(file): 1378816 kB' 'Inactive(file): 3980404 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'FilePages: 5370880 kB' 'Mapped: 68160 kB' 'AnonPages: 162336 kB' 'Shmem: 2596 kB' 'KernelStack: 4452 kB' 'PageTables: 3816 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234408 kB' 'Slab: 302468 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
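The records above show the per-node branch of setup/common.sh's get_meminfo: when a node id is passed, the function reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, loads it with mapfile, strips the "Node N " prefix from every line, and then scans key/value pairs with IFS=': '. A condensed, runnable sketch of that technique, reconstructed from the trace rather than copied from the script:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used below

    # Condensed sketch of setup/common.sh's get_meminfo as seen in the trace;
    # not the verbatim script. Prints the value for key $1, reading the
    # per-node view when a node id is passed as $2.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 HugePages_Total:   512";
        # strip the "Node N " prefix so both sources parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Quoted RHS forces a literal match (shown escaped under xtrace).
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0   # e.g. prints 0 on this runner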
00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.193 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.193 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 
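A reading note on the backslash runs that dominate this trace: they are an xtrace artifact, not script text. When the right-hand side of == inside [[ ]] is quoted, bash's set -x output escapes every character to show that the operand is a literal string rather than a glob pattern. A tiny hypothetical demonstration, not taken from the log:

    set -x
    var=HugePages_Surp
    [[ $var == "HugePages_Surp" ]] && echo match
    # xtrace renders the comparison as:
    #   [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x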
00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 
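Earlier in this pass (the get_nodes records above), setup/hugepages.sh enumerated the NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) and keyed its arrays by the numeric suffix extracted with ${node##*node}. A minimal sketch of that walk, reusing the get_meminfo sketch from above; reading HugePages_Total per node is inferred from the surrounding trace:

    shopt -s extglob nullglob

    # Sketch of the get_nodes walk: one array slot per NUMA node id.
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips through the last "node", leaving the id:
        # /sys/devices/system/node/node0 -> 0
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # the single-node runner above reports no_nodes=1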
00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # continue 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.194 16:23:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.194 16:23:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.194 16:23:17 -- setup/common.sh@33 -- # echo 0 00:05:46.194 16:23:17 -- setup/common.sh@33 -- # return 0 00:05:46.194 16:23:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.194 16:23:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.194 16:23:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.194 16:23:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.194 16:23:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:46.194 node0=512 expecting 512 00:05:46.194 16:23:17 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:46.194 00:05:46.194 real 0m0.856s 00:05:46.194 user 0m0.339s 00:05:46.194 sys 0m0.568s 00:05:46.194 16:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.194 16:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.194 ************************************ 00:05:46.194 END TEST per_node_1G_alloc 00:05:46.194 ************************************ 00:05:46.194 16:23:17 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:46.194 16:23:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.194 16:23:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.194 16:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.194 ************************************ 00:05:46.194 START TEST even_2G_alloc 00:05:46.194 ************************************ 00:05:46.194 16:23:17 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:46.194 16:23:17 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:46.194 16:23:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:46.194 16:23:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:46.194 16:23:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:46.194 16:23:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:46.194 16:23:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:46.194 16:23:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:46.194 16:23:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:46.194 16:23:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:46.194 16:23:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:46.194 16:23:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:46.194 16:23:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:46.194 16:23:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:46.194 16:23:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:46.194 16:23:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:46.194 16:23:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:46.194 16:23:17 -- setup/hugepages.sh@83 -- # : 0 
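Two pieces of arithmetic are worth spelling out at this boundary. The per_node_1G_alloc assertion (( 512 == nr_hugepages + surp + resv )) passed because surp and resv were both 0, matching the "node0=512 expecting 512" line above; the run_test guard '[' 2 -le 1 ']' is just an argument-count check in autotest_common.sh. For even_2G_alloc, get_test_nr_hugepages receives 2097152 and the trace resolves it to nr_hugepages=1024, which only works out if the size is in kB (2 GiB) divided by the 2048 kB Hugepagesize the meminfo snapshots report. A sketch of that derivation (the kB interpretation is inferred, not stated in the log):

    # Sketch: the size-to-pages arithmetic behind get_test_nr_hugepages.
    size_kb=2097152            # requested pool, read as kB: 2 GiB
    hugepagesize_kb=2048       # default 2 MiB hugepages (Hugepagesize in meminfo)
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1024, as in the trace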
00:05:46.194 16:23:17 -- setup/hugepages.sh@84 -- # : 0 00:05:46.194 16:23:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:46.194 16:23:17 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:46.194 16:23:17 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:46.194 16:23:17 -- setup/hugepages.sh@153 -- # setup output 00:05:46.194 16:23:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.194 16:23:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:46.765 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:47.359 16:23:18 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:47.359 16:23:18 -- setup/hugepages.sh@89 -- # local node 00:05:47.359 16:23:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:47.359 16:23:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:47.359 16:23:18 -- setup/hugepages.sh@92 -- # local surp 00:05:47.359 16:23:18 -- setup/hugepages.sh@93 -- # local resv 00:05:47.359 16:23:18 -- setup/hugepages.sh@94 -- # local anon 00:05:47.359 16:23:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:47.359 16:23:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:47.359 16:23:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:47.359 16:23:18 -- setup/common.sh@18 -- # local node= 00:05:47.359 16:23:18 -- setup/common.sh@19 -- # local var val 00:05:47.359 16:23:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.359 16:23:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.359 16:23:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.359 16:23:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.359 16:23:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.359 16:23:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4210488 kB' 'MemAvailable: 9482824 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379900 kB' 'Inactive: 4124360 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 143964 kB' 'Active(file): 1378836 kB' 'Inactive(file): 3980396 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 162688 kB' 'Mapped: 68684 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302612 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68204 kB' 'KernelStack: 4432 kB' 'PageTables: 3676 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 531416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.359 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.359 16:23:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- 
setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.360 16:23:18 -- setup/common.sh@33 -- # echo 0 00:05:47.360 16:23:18 -- setup/common.sh@33 -- # return 0 00:05:47.360 16:23:18 -- setup/hugepages.sh@97 -- # anon=0 00:05:47.360 16:23:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:47.360 16:23:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.360 16:23:18 -- setup/common.sh@18 -- # local node= 00:05:47.360 16:23:18 -- setup/common.sh@19 -- # local var val 00:05:47.360 16:23:18 -- setup/common.sh@20 -- 
# local mem_f mem 00:05:47.360 16:23:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.360 16:23:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.360 16:23:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.360 16:23:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.360 16:23:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4210488 kB' 'MemAvailable: 9482824 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379900 kB' 'Inactive: 4124360 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 143964 kB' 'Active(file): 1378836 kB' 'Inactive(file): 3980396 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 162948 kB' 'Mapped: 68164 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302612 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68204 kB' 'KernelStack: 4432 kB' 'PageTables: 3676 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 528964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 
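Before the HugePages_Surp scan above, verify_nr_hugepages gated its anon accounting on the transparent-hugepage mode: the earlier record [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] compares the content of /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word is the active mode, against the literal pattern *[never]*. Since THP is not disabled on this runner, AnonHugePages is queried and comes back 0. A minimal sketch of that gate:

    # Sketch of the anon-hugepages gate at the top of verify_nr_hugepages.
    # /sys/kernel/mm/transparent_hugepage/enabled reads like
    # "always [madvise] never"; the bracketed word is the active mode.
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so AnonHugePages may be nonzero; count it.
        anon=$(get_meminfo AnonHugePages)   # 0 on this runner
    fi
    echo "anon=$anon"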
00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.360 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.360 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
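For readers following the scans: besides HugePages_Total, the verifier reads two pool counters whose kernel semantics matter here. HugePages_Surp counts surplus pages allocated beyond nr_hugepages through overcommit, and HugePages_Rsvd counts pages promised to mappings but not yet faulted in. Both are 0 in this run, so the accounting identity collapses to a straight equality. A sketch of the shape of the check that the trace condenses at hugepages.sh@107 (the operand sources are inferred):

    # Pool counters read by the verifier (names as in /proc/meminfo):
    #   HugePages_Surp - pages allocated beyond nr_hugepages via overcommit
    #   HugePages_Rsvd - pages promised to mappings but not yet faulted in
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    nr=$(</proc/sys/vm/nr_hugepages)     # 1024 after setup.sh ran
    # Shape of the check condensed at hugepages.sh@107 above:
    (( 1024 == nr + surp + resv )) && echo 'hugepage accounting consistent'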
00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.361 16:23:18 -- setup/common.sh@33 -- # echo 0 00:05:47.361 16:23:18 -- setup/common.sh@33 -- # return 0 00:05:47.361 16:23:18 -- setup/hugepages.sh@99 -- # surp=0 00:05:47.361 16:23:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:47.361 16:23:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:47.361 16:23:18 -- setup/common.sh@18 -- # local node= 00:05:47.361 16:23:18 -- setup/common.sh@19 -- # local var val 00:05:47.361 16:23:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.361 16:23:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.361 16:23:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.361 16:23:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.361 16:23:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.361 16:23:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4210488 kB' 'MemAvailable: 9482824 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379900 kB' 'Inactive: 4124116 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 143720 kB' 'Active(file): 1378836 kB' 'Inactive(file): 3980396 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 162632 kB' 'Mapped: 68164 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302612 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68204 kB' 'KernelStack: 4352 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 528964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.361 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.361 16:23:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 
16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 
-- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # continue 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.362 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.362 16:23:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.362 16:23:18 -- setup/common.sh@33 -- # echo 0 00:05:47.362 16:23:18 -- setup/common.sh@33 -- # return 0 00:05:47.362 16:23:18 -- setup/hugepages.sh@100 -- # resv=0 00:05:47.362 nr_hugepages=1024 00:05:47.362 16:23:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:47.362 resv_hugepages=0 00:05:47.362 surplus_hugepages=0 00:05:47.362 anon_hugepages=0 00:05:47.362 16:23:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:47.362 16:23:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:47.362 16:23:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:47.362 16:23:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:47.362 16:23:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:47.362 16:23:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:47.362 16:23:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:47.362 16:23:18 -- setup/common.sh@18 -- # local node= 00:05:47.362 16:23:18 -- setup/common.sh@19 -- # local var val 00:05:47.362 16:23:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.362 16:23:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.362 16:23:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.362 16:23:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.363 16:23:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.363 16:23:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.363 16:23:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.363 16:23:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.363 16:23:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4210488 kB' 'MemAvailable: 9482824 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379900 kB' 'Inactive: 4123964 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 143568 kB' 'Active(file): 1378836 kB' 'Inactive(file): 3980396 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 162396 kB' 'Mapped: 68204 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302612 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68204 kB' 'KernelStack: 4452 kB' 'PageTables: 3556 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 528964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 
[trace condensed: the setup/common.sh@31-32 read loop steps through each /proc/meminfo field (MemTotal .. FilePmdMapped), continuing until the requested HugePages_Total field is reached]
00:05:47.364 16:23:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:47.364 16:23:18 -- setup/common.sh@33 -- # echo 1024
00:05:47.364 16:23:18 -- setup/common.sh@33 -- # return 0
00:05:47.364 16:23:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:47.364 16:23:18 -- setup/hugepages.sh@112 -- # get_nodes
00:05:47.364 16:23:18 -- setup/hugepages.sh@27 -- # local node
00:05:47.364 16:23:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:47.364 16:23:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:47.364 16:23:18 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:47.364 16:23:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:47.364 16:23:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:47.364 16:23:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:47.364 16:23:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:47.364 16:23:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:47.364 16:23:18 -- setup/common.sh@18 -- # local node=0
00:05:47.364 16:23:18 -- setup/common.sh@19 -- # local var val
00:05:47.364 16:23:18 -- setup/common.sh@20 -- # local mem_f mem
00:05:47.364 16:23:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:47.364 16:23:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:47.364 16:23:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:47.364 16:23:18 -- setup/common.sh@28 -- # mapfile -t mem
00:05:47.364 16:23:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:47.364 16:23:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4210488 kB' 'MemUsed: 8032488 kB' 'SwapCached: 0 kB' 'Active: 1379900 kB' 'Inactive: 4124484 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 144088 kB' 'Active(file): 1378836 kB' 'Inactive(file): 3980396 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 5370892 kB' 'Mapped: 68204 kB' 'AnonPages: 162656 kB' 'Shmem: 2596 kB' 'KernelStack: 4520 kB' 'PageTables: 3816 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234408 kB' 'Slab: 302612 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
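When get_meminfo is given a node argument (the get_meminfo HugePages_Surp 0 call above), the trace shows it swapping /proc/meminfo for /sys/devices/system/node/node0/meminfo and stripping the 'Node 0 ' prefix that per-node meminfo lines carry, via the mem=("${mem[@]#Node +([0-9]) }") step. A minimal sketch of that source selection (an illustrative simplification, not the SPDK script itself; the hypothetical helper name and the extglob requirement for the +([0-9]) pattern are assumptions):

    # Sketch only: node-aware meminfo source selection.
    shopt -s extglob
    read_node_meminfo() {                  # hypothetical helper name
        local node=$1 mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the 'Node <n> ' prefix
        printf '%s\n' "${mem[@]}"
    }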
00:05:47.364 16:23:18 -- setup/common.sh@31 -- # IFS=': '
00:05:47.364 16:23:18 -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the setup/common.sh@31-32 read loop steps through each node0 meminfo field (MemTotal .. HugePages_Free), continuing until the requested HugePages_Surp field is reached]
00:05:47.365 16:23:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:47.365 16:23:18 -- setup/common.sh@33 -- # echo 0
00:05:47.365 16:23:18 -- setup/common.sh@33 -- # return 0
00:05:47.365 node0=1024 expecting 1024
00:05:47.365 ************************************
00:05:47.365 END TEST even_2G_alloc
00:05:47.365 ************************************
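The checks traced above reduce to one accounting identity: the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages, and the per-node tally must match, which is what the 'node0=1024 expecting 1024' output reports. A hedged sketch of that arithmetic (simplified from the traced hugepages.sh logic, reusing the get_meminfo sketch above):

    # Sketch only: the identity behind 'node0=1024 expecting 1024'.
    nr_hugepages=1024
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this trace
    surp=$(get_meminfo HugePages_Surp)     # 0 in this trace
    total=$(get_meminfo HugePages_Total)   # 1024 in this trace
    (( total == nr_hugepages + surp + resv )) \
        && echo "node0=$total expecting $nr_hugepages"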
00:05:47.365 16:23:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:47.365 16:23:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:47.365 16:23:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:47.365 16:23:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:47.365 16:23:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:47.365 16:23:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:47.365
00:05:47.365 real 0m1.029s
00:05:47.365 user 0m0.285s
00:05:47.365 sys 0m0.796s
00:05:47.365 16:23:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:47.365 16:23:18 -- common/autotest_common.sh@10 -- # set +x
00:05:47.365 16:23:18 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:47.365 16:23:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:47.365 16:23:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:47.365 16:23:18 -- common/autotest_common.sh@10 -- # set +x
00:05:47.365 ************************************
00:05:47.365 START TEST odd_alloc
00:05:47.365 ************************************
00:05:47.365 16:23:18 -- common/autotest_common.sh@1104 -- # odd_alloc
00:05:47.365 16:23:18 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:47.365 16:23:18 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:47.365 16:23:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:47.365 16:23:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:47.365 16:23:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:47.365 16:23:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:47.365 16:23:18 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:47.365 16:23:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:47.365 16:23:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:47.365 16:23:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:47.365 16:23:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:47.365 16:23:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:47.365 16:23:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:47.365 16:23:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:47.365 16:23:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:47.365 16:23:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:47.365 16:23:18 -- setup/hugepages.sh@83 -- # : 0
00:05:47.365 16:23:18 -- setup/hugepages.sh@84 -- # : 0
00:05:47.365 16:23:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:47.365 16:23:18 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:47.365 16:23:18 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:47.365 16:23:18 -- setup/hugepages.sh@160 -- # setup output
00:05:47.365 16:23:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:47.365 16:23:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:47.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:47.933 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
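The odd_alloc sizing traced above is plain arithmetic: a request of 2098176 kB at the default 2048 kB hugepage size is 1024.5 pages, which rounds up to the odd count nr_hugepages=1025, and 2098176 kB is exactly 2049 MB, matching HUGEMEM=2049. A worked check (shell arithmetic only; the round-up rule is inferred from the traced values, not quoted from the script):

    # Worked numbers for the get_test_nr_hugepages 2098176 call above.
    size_kb=2098176
    page_kb=2048                                     # Hugepagesize
    echo $(( (size_kb + page_kb - 1) / page_kb ))    # 1025 -> nr_hugepages
    echo $(( size_kb / 1024 ))                       # 2049 -> HUGEMEM (MB)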
00:05:48.872 16:23:20 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:48.872 16:23:20 -- setup/hugepages.sh@89 -- # local node
00:05:48.872 16:23:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:48.872 16:23:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:48.872 16:23:20 -- setup/hugepages.sh@92 -- # local surp
00:05:48.872 16:23:20 -- setup/hugepages.sh@93 -- # local resv
00:05:48.872 16:23:20 -- setup/hugepages.sh@94 -- # local anon
00:05:48.872 16:23:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:48.872 16:23:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:48.872 16:23:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:48.872 16:23:20 -- setup/common.sh@18 -- # local node=
00:05:48.872 16:23:20 -- setup/common.sh@19 -- # local var val
00:05:48.872 16:23:20 -- setup/common.sh@20 -- # local mem_f mem
00:05:48.872 16:23:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.872 16:23:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.872 16:23:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.872 16:23:20 -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.872 16:23:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.872 16:23:20 -- setup/common.sh@31 -- # IFS=': '
00:05:48.872 16:23:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4207104 kB' 'MemAvailable: 9479440 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379916 kB' 'Inactive: 4119616 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139228 kB' 'Active(file): 1378844 kB' 'Inactive(file): 3980388 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 157888 kB' 'Mapped: 67240 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302544 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68136 kB' 'KernelStack: 4352 kB' 'PageTables: 3400 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:48.872 16:23:20 -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the setup/common.sh@31-32 read loop steps through each /proc/meminfo field (MemTotal .. HardwareCorrupted), continuing until the requested AnonHugePages field is reached]
00:05:48.873 16:23:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:48.873 16:23:20 -- setup/common.sh@33 -- # echo 0
00:05:48.873 16:23:20 -- setup/common.sh@33 -- # return 0
00:05:48.873 16:23:20 -- setup/hugepages.sh@97 -- # anon=0
00:05:48.873 16:23:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:48.873 16:23:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:48.873 16:23:20 -- setup/common.sh@18 -- # local node=
00:05:48.873 16:23:20 -- setup/common.sh@19 -- # local var val
00:05:48.873 16:23:20 -- setup/common.sh@20 -- # local mem_f mem
00:05:48.873 16:23:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.873 16:23:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.873 16:23:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.873 16:23:20 -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.873 16:23:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.873 16:23:20 -- setup/common.sh@31 -- # IFS=': '
00:05:48.873 16:23:20 -- setup/common.sh@31 -- # read -r var val _
00:05:48.873 16:23:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4207104 kB' 'MemAvailable: 9479440 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379908 kB' 'Inactive: 4119616 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139228 kB' 'Active(file): 1378844 kB' 'Inactive(file): 3980388 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 157828 kB' 'Mapped: 67236 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302544 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68136 kB' 'KernelStack: 4352 kB' 'PageTables: 3400 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
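The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] gate traced above checks the transparent hugepage setting first (that string is the typical content of /sys/kernel/mm/transparent_hugepage/enabled); verify_nr_hugepages then gathers its counters one get_meminfo call at a time: AnonHugePages returned 0 above, and HugePages_Surp and HugePages_Rsvd are queried in the trace that follows. As a usage sketch (hypothetical driver code built on the get_meminfo sketch earlier):

    # Sketch only: the counter-collection order used by the verify pass.
    anon=$(get_meminfo AnonHugePages)    # 0 above
    surp=$(get_meminfo HugePages_Surp)   # returns 0 in the trace below
    resv=$(get_meminfo HugePages_Rsvd)   # queried last in this pass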
[trace condensed: the setup/common.sh@31-32 read loop steps through each /proc/meminfo field (MemTotal .. HugePages_Rsvd), continuing until the requested HugePages_Surp field is reached]
00:05:48.874 16:23:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:48.874 16:23:20 -- setup/common.sh@33 -- # echo 0
00:05:48.874 16:23:20 -- setup/common.sh@33 -- # return 0
00:05:48.874 16:23:20 -- setup/hugepages.sh@99 -- # surp=0
00:05:48.875 16:23:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:48.875 16:23:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:48.875 16:23:20 -- setup/common.sh@18 -- # local node=
00:05:48.875 16:23:20 -- setup/common.sh@19 -- # local var val
00:05:48.875 16:23:20 -- setup/common.sh@20 -- # local mem_f mem
00:05:48.875 16:23:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.875 16:23:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.875 16:23:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.875 16:23:20 -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.875 16:23:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.875 16:23:20 -- setup/common.sh@31 -- # IFS=': '
00:05:48.875 16:23:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4207104 kB' 'MemAvailable: 9479440 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379908 kB' 'Inactive: 4119736 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139348 kB' 'Active(file): 1378844 kB' 'Inactive(file): 3980388 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 157948 kB' 'Mapped: 67236 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302544 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68136 kB' 'KernelStack: 4336 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379908 kB' 'Inactive: 4119736 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139348 kB' 'Active(file): 1378844 kB' 'Inactive(file): 3980388 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 157948 kB' 'Mapped: 67236 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302544 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68136 kB' 'KernelStack: 4336 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:48.875 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.875 16:23:20 -- setup/common.sh@31 
-- # IFS=': ' 00:05:48.875 16:23:20 -- setup/common.sh@31 -- # read -r var val _ [the xtrace repeats this IFS=': ' / read / compare / continue cycle for each remaining /proc/meminfo field until the requested key matches]
00:05:48.876 16:23:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:48.876 16:23:20 -- setup/common.sh@33 -- # echo 0
00:05:48.876 16:23:20 -- setup/common.sh@33 -- # return 0
00:05:48.876 16:23:20 -- setup/hugepages.sh@100 -- # resv=0
00:05:48.876 16:23:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:48.876 nr_hugepages=1025
00:05:48.876 16:23:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:48.876 resv_hugepages=0
00:05:48.876 16:23:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:48.876 surplus_hugepages=0
00:05:48.876 16:23:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:48.876 anon_hugepages=0
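For readers following the trace: the long compare-and-continue runs above come from a helper that scans /proc/meminfo line by line with IFS=': ' until the requested key is found, then echoes its value. A minimal self-contained sketch of that pattern, assuming a Linux /proc/meminfo layout as in the dumps above (the function name meminfo_value is illustrative, not the script's own):

#!/usr/bin/env bash
# Print the value column for a single /proc/meminfo key, e.g. HugePages_Rsvd.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key comes up, as in the trace above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
}

meminfo_value HugePages_Rsvd   # prints 0 on the VM traced here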
00:05:48.876 16:23:20 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:48.876 16:23:20 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:48.876 16:23:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:48.876 16:23:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:48.876 16:23:20 -- setup/common.sh@18 -- # local node= 00:05:48.876 16:23:20 -- setup/common.sh@19 -- # local var val 00:05:48.876 16:23:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.876 16:23:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.876 16:23:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:48.876 16:23:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:48.876 16:23:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.876 16:23:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.876 16:23:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4207104 kB' 'MemAvailable: 9479440 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379908 kB' 'Inactive: 4119488 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139100 kB' 'Active(file): 1378844 kB' 'Inactive(file): 3980388 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 157744 kB' 'Mapped: 67496 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302544 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68136 kB' 'KernelStack: 4404 kB' 'PageTables: 3372 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 515640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.876 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:48.876 16:23:20 -- setup/common.sh@31 -- # read -r var val _ [the xtrace repeats this compare-and-continue cycle for each remaining /proc/meminfo field] 00:05:48.877 16:23:20 -- setup/common.sh@32 -- # [[ HugePages_Total ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:48.877 16:23:20 -- setup/common.sh@33 -- # echo 1025 00:05:48.877 16:23:20 -- setup/common.sh@33 -- # return 0 00:05:48.877 16:23:20 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:48.877 16:23:20 -- setup/hugepages.sh@112 -- # get_nodes 00:05:48.877 16:23:20 -- setup/hugepages.sh@27 -- # local node 00:05:48.877 16:23:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:48.877 16:23:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:48.877 16:23:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:48.877 16:23:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:48.877 16:23:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:48.877 16:23:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:48.877 16:23:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:48.877 16:23:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:48.877 16:23:20 -- setup/common.sh@18 -- # local node=0 00:05:48.877 16:23:20 -- setup/common.sh@19 -- # local var val 00:05:48.877 16:23:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:48.877 16:23:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:48.877 16:23:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:48.877 16:23:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:48.877 16:23:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:48.877 16:23:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:48.877 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.877 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.877 16:23:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4207608 kB' 'MemUsed: 8035368 kB' 'SwapCached: 0 kB' 'Active: 1379908 kB' 'Inactive: 4119392 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139004 kB' 'Active(file): 1378844 kB' 'Inactive(file): 3980388 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 5370892 kB' 'Mapped: 67236 kB' 'AnonPages: 157688 kB' 'Shmem: 2596 kB' 'KernelStack: 4416 kB' 'PageTables: 3552 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234408 kB' 'Slab: 302548 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:48.877 16:23:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.877 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.877 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.877 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.877 16:23:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.877 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.877 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.877 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.877 16:23:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.877 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.877 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.877 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # 
continue [the xtrace repeats this compare-and-continue cycle field by field over the node0 meminfo dump] 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.878
16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # continue 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:48.878 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:48.878 16:23:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:48.878 16:23:20 -- setup/common.sh@33 -- # echo 0 00:05:48.878 16:23:20 -- setup/common.sh@33 -- # return 0 00:05:48.878 16:23:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:48.878 16:23:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:48.878 16:23:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:48.878 16:23:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:48.878 16:23:20 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:48.878 node0=1025 expecting 1025 00:05:48.878 16:23:20 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:48.878 00:05:48.878 real 0m1.510s 00:05:48.878 user 0m0.335s 00:05:48.878 sys 0m1.154s 00:05:48.878 16:23:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.878 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:05:48.878 ************************************ 00:05:48.878 END TEST odd_alloc 00:05:48.878 ************************************ 00:05:48.878 16:23:20 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:48.878 16:23:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.878 16:23:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.878 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:05:48.878 ************************************ 00:05:48.878 START TEST custom_alloc 00:05:48.878 ************************************ 00:05:48.878 16:23:20 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:48.878 16:23:20 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:48.878 16:23:20 -- setup/hugepages.sh@169 -- # local node 00:05:48.878 16:23:20 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:48.878 16:23:20 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:48.878 16:23:20 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:48.878 16:23:20 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 
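The get_test_nr_hugepages 1048576 call above takes a size in kB, and the trace below divides it by the default hugepage size to get a page count, then spreads that count over the available NUMA nodes (a single node on this VM). A condensed sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps (variable names follow the trace; the standalone framing is illustrative):

#!/usr/bin/env bash
# Derive the hugepage count the custom_alloc test requests.
size_kb=1048576                                    # argument to get_test_nr_hugepages
default_hugepages=2048                             # Hugepagesize from the meminfo dumps: 2048 kB
nr_hugepages=$(( size_kb / default_hugepages ))    # 1048576 / 2048 = 512
no_nodes=1                                         # only node0 exists on this VM
per_node=$(( nr_hugepages / no_nodes ))
echo "nr_hugepages=$nr_hugepages per_node=$per_node"   # nr_hugepages=512 per_node=512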
00:05:48.878 16:23:20 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:48.878 16:23:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:48.878 16:23:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:48.878 16:23:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:48.878 16:23:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:48.878 16:23:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:48.878 16:23:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:48.878 16:23:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:48.878 16:23:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:48.878 16:23:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:48.878 16:23:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:48.878 16:23:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:48.878 16:23:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:48.878 16:23:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:48.878 16:23:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:48.878 16:23:20 -- setup/hugepages.sh@83 -- # : 0 00:05:48.878 16:23:20 -- setup/hugepages.sh@84 -- # : 0 00:05:48.878 16:23:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:48.878 16:23:20 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:48.878 16:23:20 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:48.878 16:23:20 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:48.879 16:23:20 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:48.879 16:23:20 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:48.879 16:23:20 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:48.879 16:23:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:48.879 16:23:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:48.879 16:23:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:48.879 16:23:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:48.879 16:23:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:48.879 16:23:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:48.879 16:23:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:48.879 16:23:20 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:48.879 16:23:20 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:48.879 16:23:20 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:48.879 16:23:20 -- setup/hugepages.sh@78 -- # return 0 00:05:48.879 16:23:20 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:48.879 16:23:20 -- setup/hugepages.sh@187 -- # setup output 00:05:48.879 16:23:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.879 16:23:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:49.446 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.709 16:23:20 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:49.709 16:23:20 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:49.709 16:23:20 -- setup/hugepages.sh@89 -- # local node 00:05:49.709 16:23:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:49.709 16:23:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:49.709 16:23:20 -- setup/hugepages.sh@92 -- # local surp 00:05:49.709 16:23:20 -- setup/hugepages.sh@93 -- # local resv 00:05:49.709 16:23:20 -- setup/hugepages.sh@94 -- # local anon 
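Before calling scripts/setup.sh, the trace above assembles a HUGENODE spec from the per-node nodes_hp array; with one node holding all 512 pages it collapses to the single entry seen in the log. A sketch of that assembly, mirroring the local IFS=, at hugepages.sh@167 (standalone; only the values shown in the trace are assumed):

#!/usr/bin/env bash
# Build the HUGENODE string the way the traced loop does.
nodes_hp=([0]=512)                    # node 0 gets all 512 hugepages
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
IFS=,                                 # join entries with commas, as in the trace
echo "HUGENODE='${HUGENODE[*]}'"      # HUGENODE='nodes_hp[0]=512'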
00:05:49.709 16:23:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:49.709 16:23:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:49.709 16:23:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:49.709 16:23:20 -- setup/common.sh@18 -- # local node= 00:05:49.709 16:23:20 -- setup/common.sh@19 -- # local var val 00:05:49.709 16:23:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:49.709 16:23:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.709 16:23:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.709 16:23:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.709 16:23:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.709 16:23:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.709 16:23:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5258864 kB' 'MemAvailable: 10531200 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379932 kB' 'Inactive: 4119900 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139528 kB' 'Active(file): 1378860 kB' 'Inactive(file): 3980372 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 158480 kB' 'Mapped: 67260 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302148 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 67740 kB' 'KernelStack: 4488 kB' 'PageTables: 4172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # continue 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # continue 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # continue 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # continue 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # continue 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.709 16:23:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.709 16:23:20 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.709 16:23:20 -- setup/common.sh@32 -- # continue [the xtrace repeats this compare-and-continue cycle field by field until AnonHugePages]
00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:49.710 16:23:21 -- setup/common.sh@33 -- # echo 0
00:05:49.710 16:23:21 -- setup/common.sh@33 -- # return 0
00:05:49.710 16:23:21 -- setup/hugepages.sh@97 -- # anon=0
00:05:49.710 16:23:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:49.710 16:23:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:49.710 16:23:21 -- setup/common.sh@18 -- # local node=
00:05:49.710 16:23:21 -- setup/common.sh@19 -- # local var val
00:05:49.710 16:23:21 -- setup/common.sh@20 -- # local mem_f mem
00:05:49.710 16:23:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:49.710 16:23:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:49.710 16:23:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:49.710 16:23:21 -- setup/common.sh@28 -- # mapfile -t mem
00:05:49.710 16:23:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': '
00:05:49.710 16:23:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5259116 kB' 'MemAvailable: 10531452 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379932 kB' 'Inactive: 4119676 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139304 kB' 'Active(file): 1378860 kB' 'Inactive(file): 3980372 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 157952 kB' 'Mapped: 67260 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302300 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 67892 kB' 'KernelStack: 4368 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0
kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.710 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.710 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.711 16:23:21 -- setup/common.sh@33 -- # echo 0 00:05:49.711 16:23:21 -- setup/common.sh@33 -- # return 0 00:05:49.711 16:23:21 -- setup/hugepages.sh@99 -- # surp=0 00:05:49.711 16:23:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:49.711 16:23:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:49.711 16:23:21 -- setup/common.sh@18 -- # local node= 00:05:49.711 16:23:21 -- setup/common.sh@19 -- # local var val 00:05:49.711 16:23:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:49.711 16:23:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.711 16:23:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.711 16:23:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.711 16:23:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.711 16:23:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
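The loop being traced above is easier to read as source than as xtrace. Below is a minimal sketch of the get_meminfo pattern, assuming a simplified re-implementation for illustration; the real setup/common.sh helper is what the @17-@33 trace lines come from and differs in detail.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern traced above (an illustrative
# re-implementation under stated assumptions, not the exact SPDK helper).
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node id, read the per-NUMA-node statistics from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        # Split "HugePages_Surp:        0" into key and value on ':' and spaces.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp     # system-wide, from /proc/meminfo
get_meminfo HugePages_Surp 0   # NUMA node 0, from sysfs

Called without a node it scans /proc/meminfo; with a node id it scans the per-node sysfs file, whose "Node <id> " line prefix is removed by the extglob substitution, which is why the trace shows mem=("${mem[@]#Node +([0-9]) }") on every call.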
+([0-9]) }") 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5259324 kB' 'MemAvailable: 10531660 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379932 kB' 'Inactive: 4119752 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139380 kB' 'Active(file): 1378860 kB' 'Inactive(file): 3980372 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 158020 kB' 'Mapped: 67260 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302300 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 67892 kB' 'KernelStack: 4348 kB' 'PageTables: 3472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.711 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.711 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # 
continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # continue 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:49.712 16:23:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.712 16:23:21 -- setup/common.sh@33 -- # echo 0 00:05:49.712 16:23:21 -- setup/common.sh@33 -- # return 0 00:05:49.712 16:23:21 -- setup/hugepages.sh@100 -- # resv=0 00:05:49.712 16:23:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:49.712 nr_hugepages=512 00:05:49.712 16:23:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:49.712 
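Taken together, the three lookups above implement a consistency check on the hugepage pool. A condensed sketch of that bookkeeping follows, assuming the get_meminfo helper sketched earlier, with variable names mirroring the trace rather than the verbatim setup/hugepages.sh.

# Condensed sketch of the accounting step above (assumes the get_meminfo
# sketch from earlier; simplified from setup/hugepages.sh, not verbatim).
anon=$(get_meminfo AnonHugePages)    # 0 kB here: no transparent hugepages in play
surp=$(get_meminfo HugePages_Surp)   # 0: nothing allocated beyond the pool size
resv=$(get_meminfo HugePages_Rsvd)   # 0: nothing reserved but not yet faulted in
nr_hugepages=512                     # the count custom_alloc requested

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The pool is consistent when the requested count matches the kernel's view,
# with surplus and reserved pages folded into the comparison.
(( 512 == nr_hugepages + surp + resv )) || exit 1
(( 512 == nr_hugepages )) || exit 1
total=$(get_meminfo HugePages_Total)
(( 512 == total + surp + resv )) || exit 1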
00:05:49.712 16:23:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:49.712 16:23:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:49.712 16:23:21 -- setup/common.sh@18 -- # local node=
00:05:49.712 16:23:21 -- setup/common.sh@19 -- # local var val
00:05:49.712 16:23:21 -- setup/common.sh@20 -- # local mem_f mem
00:05:49.712 16:23:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:49.712 16:23:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:49.712 16:23:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:49.712 16:23:21 -- setup/common.sh@28 -- # mapfile -t mem
00:05:49.712 16:23:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:49.712 16:23:21 -- setup/common.sh@31 -- # IFS=': '
00:05:49.712 16:23:21 -- setup/common.sh@31 -- # read -r var val _
00:05:49.712 16:23:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5259324 kB' 'MemAvailable: 10531660 kB' 'Buffers: 40460 kB' 'Cached: 5330432 kB' 'SwapCached: 0 kB' 'Active: 1379932 kB' 'Inactive: 4119372 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139000 kB' 'Active(file): 1378860 kB' 'Inactive(file): 3980372 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 157632 kB' 'Mapped: 67260 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302300 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 67892 kB' 'KernelStack: 4416 kB' 'PageTables: 3472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
[xtrace condensed: per-key read/compare/continue loop over the snapshot above until HugePages_Total matched]
00:05:49.714 16:23:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:49.714 16:23:21 -- setup/common.sh@33 -- # echo 512
00:05:49.714 16:23:21 -- setup/common.sh@33 -- # return 0
00:05:49.714 16:23:21 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:49.714 16:23:21 -- setup/hugepages.sh@112 -- # get_nodes
00:05:49.714 16:23:21 -- setup/hugepages.sh@27 -- # local node
00:05:49.714 16:23:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:49.714 16:23:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:49.714 16:23:21 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:49.714 16:23:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:49.714 16:23:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:49.714 16:23:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
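get_nodes above walks /sys/devices/system/node to enumerate NUMA nodes, and the loop that follows re-checks the pool on each node. Here is a sketch of the same per-node check reading the sysfs hugepage counters directly, as an illustration; the script itself goes back through get_meminfo with a node argument, as the trace below shows.

# Sketch of the per-node verification (a condensation for illustration; the
# traced script fetches per-node values via get_meminfo instead of sysfs).
shopt -s extglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    # Per-node 2 MiB hugepage count lives under sysfs.
    nodes_sys[$id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
(( ${#nodes_sys[@]} > 0 )) || exit 1   # at least one node must exist

for id in "${!nodes_sys[@]}"; do
    echo "node$id=${nodes_sys[$id]} expecting 512"
    [[ ${nodes_sys[$id]} == 512 ]] || exit 1
done

On the single-node VM in this run, such a check prints node0=512 expecting 512, matching the output further down.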
00:05:49.714 16:23:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:49.714 16:23:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:49.714 16:23:21 -- setup/common.sh@18 -- # local node=0
00:05:49.714 16:23:21 -- setup/common.sh@19 -- # local var val
00:05:49.714 16:23:21 -- setup/common.sh@20 -- # local mem_f mem
00:05:49.714 16:23:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:49.714 16:23:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:49.714 16:23:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:49.714 16:23:21 -- setup/common.sh@28 -- # mapfile -t mem
00:05:49.714 16:23:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:49.714 16:23:21 -- setup/common.sh@31 -- # IFS=': '
00:05:49.714 16:23:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5259576 kB' 'MemUsed: 6983400 kB' 'SwapCached: 0 kB' 'Active: 1379932 kB' 'Inactive: 4119472 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139100 kB' 'Active(file): 1378860 kB' 'Inactive(file): 3980372 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 5370892 kB' 'Mapped: 67260 kB' 'AnonPages: 157720 kB' 'Shmem: 2596 kB' 'KernelStack: 4368 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234408 kB' 'Slab: 302300 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 67892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:49.714 16:23:21 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: per-key read/compare/continue loop over the node0 snapshot above until HugePages_Surp matched]
00:05:49.974 16:23:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:49.974 16:23:21 -- setup/common.sh@33 -- # echo 0
00:05:49.974 16:23:21 -- setup/common.sh@33 -- # return 0
00:05:49.974 node0=512 expecting 512
00:05:49.974 ************************************
00:05:49.974 END TEST custom_alloc
00:05:49.974 16:23:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:49.974 16:23:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:49.974 16:23:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:49.974 16:23:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:49.974 16:23:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:49.974 16:23:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:49.974
00:05:49.974 real 0m0.871s
00:05:49.974 user 0m0.260s
00:05:49.974 sys 0m0.578s
00:05:49.974 16:23:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:49.974 16:23:21 -- common/autotest_common.sh@10 -- # set +x
00:05:49.974 ************************************
00:05:49.974 16:23:21 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:49.974 16:23:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:49.974 16:23:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:49.974 16:23:21 -- common/autotest_common.sh@10 -- # set +x
00:05:49.975 ************************************
00:05:49.975 START TEST no_shrink_alloc
00:05:49.975 ************************************
00:05:49.975 16:23:21 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:05:49.975 16:23:21 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:49.975 16:23:21 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:49.975 16:23:21 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:49.975 16:23:21 -- setup/hugepages.sh@51 -- # shift
00:05:49.975 16:23:21 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:49.975 16:23:21 -- setup/hugepages.sh@52 -- # local node_ids
00:05:49.975 16:23:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:49.975 16:23:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:49.975 16:23:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:49.975 16:23:21 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:49.975 16:23:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:49.975 16:23:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:49.975 16:23:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:49.975 16:23:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:49.975 16:23:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:49.975 16:23:21 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:49.975 16:23:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:49.975 16:23:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:49.975 16:23:21 -- setup/hugepages.sh@73 -- # return 0
00:05:49.975 16:23:21 -- setup/hugepages.sh@198 -- # setup output
00:05:49.975 16:23:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:49.975 16:23:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:50.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:50.234 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
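The no_shrink_alloc prologue above turns a size request into a page count: size=2097152 with a 2048 kB default hugepage yields nr_hugepages=1024, pinned to node 0. A sketch of that derivation follows, assuming size is in kB (inferred from the values in the trace; the real get_test_nr_hugepages may differ in detail).

# Sketch of the size -> page-count derivation traced above (simplified from
# get_test_nr_hugepages; treating size as kB is an inference, not confirmed).
get_test_nr_hugepages() {
    local size=$1; shift
    local node_ids=("$@")     # optional NUMA node list, e.g. (0)
    local default_hugepages
    # Default hugepage size in kB, 2048 on this VM.
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))
    # Pin the whole request to the requested node(s); a single node here.
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr_hugepages
    done
}

declare -A nodes_test
get_test_nr_hugepages 2097152 0   # 2097152 kB / 2048 kB -> nr_hugepages=1024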
00:05:51.174 16:23:22 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:51.174 16:23:22 -- setup/hugepages.sh@89 -- # local node
00:05:51.174 16:23:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:51.174 16:23:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:51.174 16:23:22 -- setup/hugepages.sh@92 -- # local surp
00:05:51.174 16:23:22 -- setup/hugepages.sh@93 -- # local resv
00:05:51.174 16:23:22 -- setup/hugepages.sh@94 -- # local anon
00:05:51.174 16:23:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:51.174 16:23:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:51.174 16:23:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:51.174 16:23:22 -- setup/common.sh@18 -- # local node=
00:05:51.174 16:23:22 -- setup/common.sh@19 -- # local var val
00:05:51.174 16:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:05:51.174 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:51.174 16:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:51.174 16:23:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:51.174 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:05:51.174 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:51.174 16:23:22 -- setup/common.sh@31 -- # IFS=': '
00:05:51.174 16:23:22 -- setup/common.sh@31 -- # read -r var val _
00:05:51.174 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4209128 kB' 'MemAvailable: 9481464 kB' 'Buffers: 40460 kB' 'Cached: 5330436 kB' 'SwapCached: 0 kB' 'Active: 1379960 kB' 'Inactive: 4119768 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139424 kB' 'Active(file): 1378888 kB' 'Inactive(file): 3980344 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 158080 kB' 'Mapped: 67500 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302416 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68008 kB' 'KernelStack: 4464 kB' 'PageTables: 3308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
[xtrace condensed: per-key read/compare/continue loop against \A\n\o\n\H\u\g\e\P\a\g\e\s under way when the captured log ends]
-r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:51.175 16:23:22 -- setup/common.sh@33 -- # echo 0 00:05:51.175 16:23:22 -- setup/common.sh@33 -- # return 0 00:05:51.175 16:23:22 -- setup/hugepages.sh@97 -- # anon=0 00:05:51.175 16:23:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:51.175 16:23:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:51.175 16:23:22 -- setup/common.sh@18 -- # local node= 00:05:51.175 16:23:22 -- setup/common.sh@19 -- # local var val 00:05:51.175 16:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:51.175 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.175 16:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.175 16:23:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.175 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.175 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4209128 kB' 'MemAvailable: 9481464 kB' 'Buffers: 40460 kB' 'Cached: 5330436 kB' 'SwapCached: 0 kB' 'Active: 1379960 kB' 'Inactive: 4119768 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139424 kB' 'Active(file): 1378888 kB' 'Inactive(file): 3980344 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 158080 kB' 'Mapped: 67500 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302416 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68008 kB' 'KernelStack: 4464 kB' 'PageTables: 3308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 
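The get_meminfo helper being traced here reads either /proc/meminfo or a per-node meminfo file, strips any "Node N " prefix, then splits each line on ': ' and walks the keys one by one until the requested field matches, which is why every non-matching key produces a [[ ... ]] test followed by a continue. A minimal re-creation, assuming only what the trace shows (the function name and structure below are illustrative, not the SPDK source):

    # Sketch of the scan visible in the trace: choose a meminfo source,
    # strip per-node "Node N " prefixes, then linearly search for the key.
    # (Name and fallback behavior are illustrative, not the SPDK source.)
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        shopt -s extglob
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

For example, get_meminfo_sketch AnonHugePages prints 0 on this host, matching the 'echo 0' that closed the first scan above. When no node argument is given, the /sys/devices/system/node/node/meminfo test fails and the function falls back to /proc/meminfo, which is the [[ -e ... ]] check visible at common.sh@23 in the trace.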
00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.175 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.175 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 
-- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.176 16:23:22 -- setup/common.sh@33 -- # echo 0 00:05:51.176 16:23:22 -- setup/common.sh@33 -- # return 0 00:05:51.176 16:23:22 -- setup/hugepages.sh@99 -- # surp=0 00:05:51.176 16:23:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:51.176 16:23:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:51.176 16:23:22 -- setup/common.sh@18 -- # local node= 00:05:51.176 16:23:22 -- setup/common.sh@19 -- # local var val 00:05:51.176 16:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:51.176 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.176 16:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.176 16:23:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.176 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.176 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4209128 kB' 'MemAvailable: 9481464 kB' 'Buffers: 40460 kB' 'Cached: 5330436 kB' 'SwapCached: 0 kB' 'Active: 1379960 kB' 'Inactive: 4119660 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139316 kB' 'Active(file): 1378888 kB' 'Inactive(file): 3980344 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 158028 kB' 'Mapped: 67500 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302416 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68008 kB' 'KernelStack: 4428 kB' 'PageTables: 3388 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 
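With the anonymous and surplus page counts both read back as 0, the script next queries HugePages_Rsvd. In /proc/meminfo, HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in, and HugePages_Surp counts pages allocated beyond nr_hugepages through overcommit; the hugepages.sh@107 test further down checks that the kernel's total equals the requested count plus both adjustments. Spelled out as a sketch, reusing the illustrative get_meminfo_sketch from above:

    # The hugepages.sh@107 arithmetic from the trace: the kernel's total
    # must equal the request plus any surplus and reserved pages.
    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage count mismatch' >&2

In this run all three reads come back clean (surp=0, resv=0, total=1024), so the check at hugepages.sh@110 passes and verification proceeds to per-node counts.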
00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.176 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.176 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 
-- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 
-- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.177 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.177 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:51.177 16:23:22 -- setup/common.sh@33 -- # echo 0 00:05:51.177 16:23:22 -- setup/common.sh@33 -- # return 0 00:05:51.177 16:23:22 -- setup/hugepages.sh@100 -- # resv=0 00:05:51.177 nr_hugepages=1024 00:05:51.177 16:23:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:51.177 resv_hugepages=0 00:05:51.177 16:23:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:51.177 surplus_hugepages=0 00:05:51.177 16:23:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:51.177 anon_hugepages=0 00:05:51.177 16:23:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:51.177 16:23:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:51.177 16:23:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:51.177 16:23:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:51.177 16:23:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:51.177 16:23:22 -- setup/common.sh@18 -- # local node= 00:05:51.177 16:23:22 -- setup/common.sh@19 -- # local var val 00:05:51.177 16:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:51.177 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.177 16:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.177 16:23:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.177 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.177 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.178 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4209128 kB' 'MemAvailable: 9481464 kB' 'Buffers: 40460 kB' 'Cached: 5330436 kB' 'SwapCached: 0 kB' 'Active: 1379960 kB' 'Inactive: 4119844 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139500 kB' 'Active(file): 1378888 kB' 'Inactive(file): 3980344 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 
'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 158212 kB' 'Mapped: 67280 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302416 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68008 kB' 'KernelStack: 4412 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 515860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 
-- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.178 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.178 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
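Once this scan reaches HugePages_Total and returns 1024, verification moves from system-wide to per-node accounting: get_nodes globs /sys/devices/system/node/node+([0-9]), and common.sh@23-24 below switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix, exactly what the "${mem[@]#Node +([0-9]) }" expansion strips. An illustrative walk over the same sysfs layout (variable names are mine, not the script's):

    # Enumerate NUMA nodes the way the trace's get_nodes does and record
    # each node's hugepage total (extglob is needed for the +([0-9]) glob).
    shopt -s extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        nodes_sys[$n]=$(get_meminfo_sketch HugePages_Total "$n")
    done
    echo "nodes seen: ${!nodes_sys[*]}"   # just '0' on this single-node VM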
00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.179 16:23:22 -- setup/common.sh@33 -- # echo 1024 00:05:51.179 16:23:22 -- setup/common.sh@33 -- # return 0 00:05:51.179 16:23:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:51.179 16:23:22 -- setup/hugepages.sh@112 -- # get_nodes 00:05:51.179 16:23:22 -- setup/hugepages.sh@27 -- # local node 00:05:51.179 16:23:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:51.179 16:23:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:51.179 16:23:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:51.179 16:23:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:51.179 16:23:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:51.179 16:23:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:51.179 16:23:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:51.179 16:23:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:51.179 16:23:22 -- setup/common.sh@18 -- # local node=0 00:05:51.179 16:23:22 -- setup/common.sh@19 -- # local var val 00:05:51.179 16:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:51.179 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.179 16:23:22 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:51.179 16:23:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:51.179 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.179 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4209128 kB' 'MemUsed: 8033848 kB' 'SwapCached: 0 kB' 'Active: 1379960 kB' 'Inactive: 4119608 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 139260 kB' 'Active(file): 1378888 kB' 'Inactive(file): 3980348 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5370896 kB' 'Mapped: 67280 kB' 'AnonPages: 157972 kB' 'Shmem: 2596 kB' 'KernelStack: 4412 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234408 kB' 'Slab: 302416 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 
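This node-0 scan ends with HugePages_Surp at 0, so the test prints 'node0=1024 expecting 1024' and passes. The run then re-executes setup.sh with CLEAR_HUGE=no and NRHUGE=512; because 1024 pages are already pinned on node0 and clearing is disabled, setup.sh keeps the larger existing allocation and emits the INFO line seen below. A hedged sketch of that guard (an assumption about setup.sh's behavior inferred from the log, not its actual source):

    # Assumed behavior behind the INFO message: with CLEAR_HUGE=no, an
    # existing allocation that already covers the request is left alone.
    NRHUGE=${NRHUGE:-512}
    allocated=$(get_meminfo_sketch HugePages_Total 0)
    if [[ ${CLEAR_HUGE:-no} == no ]] && (( allocated >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
    else
        echo "$NRHUGE" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    fi

The sysfs path in the else branch is the standard per-node 2 MiB hugepage knob; whether setup.sh writes it exactly this way is not shown in the trace.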
00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.179 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.179 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.180 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.180 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Free == 
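The @28/@29 pair above is the interesting part of this helper: it slurps the per-node meminfo from sysfs and strips the leading "Node N " column with an extglob pattern so the file parses just like /proc/meminfo. A minimal standalone sketch of those two steps (a hypothetical snippet, assuming bash with extglob; node0 path as in the trace):

    shopt -s extglob                                  # '+([0-9])' below is an extended glob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")                  # 'Node 0 MemTotal: ...' -> 'MemTotal: ...'
    printf '%s\n' "${mem[@]}"                         # now shaped like /proc/meminfo lines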
00:05:51.180 16:23:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:51.180 16:23:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:51.180 16:23:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:51.180 16:23:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1024 expecting 1024
00:05:51.180 16:23:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:51.180 16:23:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:51.180 16:23:22 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:51.180 16:23:22 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:51.180 16:23:22 -- setup/hugepages.sh@202 -- # setup output
00:05:51.180 16:23:22 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:51.180 16:23:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:51.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:51.440 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:51.440 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:51.440 16:23:22 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:51.440 16:23:22 -- setup/hugepages.sh@89 -- # local node
00:05:51.440 16:23:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:51.440 16:23:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:51.440 16:23:22 -- setup/hugepages.sh@92 -- # local surp
00:05:51.440 16:23:22 -- setup/hugepages.sh@93 -- # local resv
00:05:51.440 16:23:22 -- setup/hugepages.sh@94 -- # local anon
00:05:51.440 16:23:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:51.440 16:23:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:51.440 16:23:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:51.440 16:23:22 -- setup/common.sh@18 -- # local node=
00:05:51.440 16:23:22 -- setup/common.sh@19 -- # local var val
00:05:51.440 16:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:05:51.440 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:51.440 16:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:51.440 16:23:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:51.440 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:05:51.440 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:51.440 16:23:22 -- setup/common.sh@31 -- # IFS=': '
00:05:51.440 16:23:22 -- setup/common.sh@31 -- # read -r var val _
00:05:51.440 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4206636 kB' 'MemAvailable: 9478976 kB' 'Buffers: 40460 kB' 'Cached: 5330436 kB' 'SwapCached: 0 kB' 'Active: 1379956 kB' 'Inactive: 4120268 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139924 kB' 'Active(file): 1378892 kB' 'Inactive(file): 3980344 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 158652 kB' 'Mapped: 67452 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302696 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68288 kB' 'KernelStack: 4504 kB' 'PageTables: 3996 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 515836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
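The records above set up get_meminfo's scan of /proc/meminfo, and the condensed trace below shows it comparing each key in turn until the requested one matches. A minimal sketch of the whole helper as the trace implies it (a paraphrase, not the verbatim setup/common.sh):

    shopt -s extglob
    get_meminfo() {                       # usage: get_meminfo <Key> [node]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # with a node index, read the per-node sysfs file instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # harmless no-op for /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0
    }

In this run, get_meminfo AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total all resolve against /proc/meminfo, while get_meminfo HugePages_Surp 0 (further down) reads node0's sysfs file instead.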
00:05:51.441 [xtrace condensed: setup/common.sh@31-32 repeats "read -r var val _" / "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" for every key from MemTotal through HardwareCorrupted]
00:05:51.441 16:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:51.441 16:23:22 -- setup/common.sh@33 -- # echo 0
00:05:51.441 16:23:22 -- setup/common.sh@33 -- # return 0
00:05:51.441 16:23:22 -- setup/hugepages.sh@97 -- # anon=0
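The anon=0 result follows from the @96 gate traced earlier: [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] checks whether transparent hugepages are disabled, with the bracketed word in the sysfs file marking the active mode. A sketch of the same check (standard kernel sysfs path; the variable names here are illustrative):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never'
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # THP enabled: anon hugepages may exist
    else
        anon=0                              # THP off: nothing to count
    fi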
00:05:51.441 16:23:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:51.441 16:23:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:51.441 16:23:22 -- setup/common.sh@18 -- # local node=
00:05:51.441 16:23:22 -- setup/common.sh@19 -- # local var val
00:05:51.441 16:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:05:51.441 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:51.441 16:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:51.441 16:23:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:51.441 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:05:51.441 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:51.441 16:23:22 -- setup/common.sh@31 -- # IFS=': '
00:05:51.441 16:23:22 -- setup/common.sh@31 -- # read -r var val _
00:05:51.442 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4206636 kB' 'MemAvailable: 9478976 kB' 'Buffers: 40460 kB' 'Cached: 5330436 kB' 'SwapCached: 0 kB' 'Active: 1379956 kB' 'Inactive: 4120328 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139984 kB' 'Active(file): 1378892 kB' 'Inactive(file): 3980344 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 158720 kB' 'Mapped: 67452 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302696 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68288 kB' 'KernelStack: 4536 kB' 'PageTables: 4068 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 515836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:51.703 [xtrace condensed: setup/common.sh@31-32 repeats "read -r var val _" / "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" for every key from MemTotal through HugePages_Rsvd]
00:05:51.704 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:51.704 16:23:22 -- setup/common.sh@33 -- # echo 0
00:05:51.704 16:23:22 -- setup/common.sh@33 -- # return 0
00:05:51.704 16:23:22 -- setup/hugepages.sh@99 -- # surp=0
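For a quick manual check, the hugepage counters the verifier walks one key at a time can also be pulled in a single pass (plain awk, not part of the test scripts):

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo

On this host that prints Total/Free of 1024 and Rsvd/Surp of 0, matching the values the trace echoes.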
00:05:51.704 16:23:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:51.704 16:23:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:51.704 16:23:22 -- setup/common.sh@18 -- # local node=
00:05:51.704 16:23:22 -- setup/common.sh@19 -- # local var val
00:05:51.704 16:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:05:51.704 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:51.704 16:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:51.704 16:23:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:51.704 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:05:51.704 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:51.704 16:23:22 -- setup/common.sh@31 -- # IFS=': '
00:05:51.704 16:23:22 -- setup/common.sh@31 -- # read -r var val _
00:05:51.704 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4206876 kB' 'MemAvailable: 9479216 kB' 'Buffers: 40460 kB' 'Cached: 5330436 kB' 'SwapCached: 0 kB' 'Active: 1379956 kB' 'Inactive: 4119876 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139532 kB' 'Active(file): 1378892 kB' 'Inactive(file): 3980344 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 158212 kB' 'Mapped: 67452 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302696 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68288 kB' 'KernelStack: 4412 kB' 'PageTables: 3680 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 515836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:51.705 [xtrace condensed: setup/common.sh@31-32 repeats "read -r var val _" / "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" for every key from MemTotal through HugePages_Free]
00:05:51.705 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:51.705 16:23:22 -- setup/common.sh@33 -- # echo 0
00:05:51.705 16:23:22 -- setup/common.sh@33 -- # return 0
00:05:51.705 16:23:22 -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
00:05:51.705 16:23:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:05:51.705 16:23:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:05:51.705 16:23:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:05:51.705 16:23:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:51.705 16:23:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:51.705 16:23:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
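The @107/@109 arithmetic is the core invariant, and the next get_meminfo call below fetches HugePages_Total to confirm it: the kernel's pool must equal the requested page count plus surplus plus reserved pages. A compact restatement using the get_meminfo sketch above (nr_hugepages hard-coded to this run's request):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2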
00:05:51.705 16:23:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:51.705 16:23:22 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:51.705 16:23:22 -- setup/common.sh@18 -- # local node=
00:05:51.705 16:23:22 -- setup/common.sh@19 -- # local var val
00:05:51.705 16:23:22 -- setup/common.sh@20 -- # local mem_f mem
00:05:51.705 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:51.705 16:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:51.705 16:23:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:51.705 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem
00:05:51.705 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:51.705 16:23:22 -- setup/common.sh@31 -- # IFS=': '
00:05:51.705 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4206876 kB' 'MemAvailable: 9479216 kB' 'Buffers: 40460 kB' 'Cached: 5330436 kB' 'SwapCached: 0 kB' 'Active: 1379956 kB' 'Inactive: 4119928 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139584 kB' 'Active(file): 1378892 kB' 'Inactive(file): 3980344 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 158272 kB' 'Mapped: 67452 kB' 'Shmem: 2596 kB' 'KReclaimable: 234408 kB' 'Slab: 302696 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68288 kB' 'KernelStack: 4376 kB' 'PageTables: 3752 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 515836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 2996224 kB' 'DirectMap1G: 11534336 kB'
00:05:51.706 [xtrace condensed: setup/common.sh@31-32 repeats "read -r var val _" / "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" for every key from MemTotal through FilePmdMapped]
00:05:51.706 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # continue 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:51.706 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:51.706 16:23:22 -- setup/common.sh@33 -- # echo 1024 00:05:51.706 16:23:22 -- setup/common.sh@33 -- # return 0 00:05:51.706 16:23:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:51.706 16:23:22 -- setup/hugepages.sh@112 -- # get_nodes 00:05:51.706 16:23:22 -- setup/hugepages.sh@27 -- # local node 00:05:51.706 16:23:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:51.706 16:23:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:51.706 16:23:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:51.706 16:23:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:51.706 16:23:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:51.706 16:23:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:51.706 16:23:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:51.706 16:23:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:51.706 16:23:22 -- setup/common.sh@18 -- # local node=0 00:05:51.706 16:23:22 -- setup/common.sh@19 -- # local var val 00:05:51.706 16:23:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:51.706 16:23:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.706 16:23:22 -- 
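The condensed trace above is setup/common.sh's get_meminfo helper walking a meminfo file field by field. A minimal bash sketch of that pattern, reconstructed from the xtrace records rather than copied from the SPDK source (the mapfile/strip/printf pipeline matches the trace; the surrounding option handling is inferred):

    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-} var val _ mem=() mem_f=/proc/meminfo
        # Per-node queries read the node-local meminfo copy when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # In this run: get_meminfo HugePages_Total -> 1024, get_meminfo HugePages_Surp 0 -> 0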
00:05:51.706 16:23:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:51.706 16:23:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:51.706 16:23:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.706 16:23:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.706 16:23:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:51.706 16:23:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4206876 kB' 'MemUsed: 8036100 kB' 'SwapCached: 0 kB' 'Active: 1379956 kB' 'Inactive: 4120088 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139744 kB' 'Active(file): 1378892 kB' 'Inactive(file): 3980344 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5370896 kB' 'Mapped: 67452 kB' 'AnonPages: 158376 kB' 'Shmem: 2596 kB' 'KernelStack: 4424 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234408 kB' 'Slab: 302696 kB' 'SReclaimable: 234408 kB' 'SUnreclaim: 68288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' [xtrace condensed: setup/common.sh@31-32 scans each node0 meminfo field, MemTotal through HugePages_Free, and continues until HugePages_Surp matches] 00:05:51.707 16:23:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:51.707 16:23:22 -- setup/common.sh@33 -- # echo 0 00:05:51.707 16:23:22 -- setup/common.sh@33 -- # return 0 00:05:51.707 16:23:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:51.707 16:23:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:51.707 16:23:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:51.707 16:23:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:51.707 node0=1024 expecting 1024 16:23:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:51.707 16:23:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:51.707 00:05:51.707 real 0m1.744s 00:05:51.707 user 0m0.522s 00:05:51.707 sys 0m1.330s 00:05:51.707 16:23:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.707 16:23:22 -- common/autotest_common.sh@10 -- # set +x 00:05:51.707 ************************************ 00:05:51.707 END TEST no_shrink_alloc 00:05:51.707 ************************************ 00:05:51.707 16:23:23 -- setup/hugepages.sh@217 -- # clear_hp 00:05:51.707 16:23:23 -- setup/hugepages.sh@37 -- # local node hp 00:05:51.707 16:23:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:51.707 16:23:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:51.707 16:23:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:51.707 16:23:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:51.707 16:23:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:51.707 16:23:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:51.707 16:23:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:51.707 00:05:51.707 real 0m8.189s 00:05:51.707 user 0m2.367s 00:05:51.707 sys 0m5.996s 00:05:51.707 16:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.707 16:23:23 -- common/autotest_common.sh@10 -- # set +x 00:05:51.707 ************************************ 00:05:51.707 END TEST hugepages 00:05:51.707 ************************************ 00:05:51.707 16:23:23 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:51.707 16:23:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.707 16:23:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.707 16:23:23 -- common/autotest_common.sh@10 -- # set +x 00:05:51.707 ************************************ 00:05:51.707 START TEST driver 00:05:51.707 ************************************ 00:05:51.707 16:23:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:51.966 * Looking for test storage...
00:05:51.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:51.966 16:23:23 -- setup/driver.sh@68 -- # setup reset 00:05:51.966 16:23:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:51.966 16:23:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:52.533 16:23:23 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:52.533 16:23:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.533 16:23:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.533 16:23:23 -- common/autotest_common.sh@10 -- # set +x 00:05:52.533 ************************************ 00:05:52.533 START TEST guess_driver 00:05:52.533 ************************************ 00:05:52.533 16:23:23 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:52.533 16:23:23 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:52.533 16:23:23 -- setup/driver.sh@47 -- # local fail=0 00:05:52.533 16:23:23 -- setup/driver.sh@49 -- # pick_driver 00:05:52.533 16:23:23 -- setup/driver.sh@36 -- # vfio 00:05:52.533 16:23:23 -- setup/driver.sh@21 -- # local iommu_groups 00:05:52.533 16:23:23 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:52.533 16:23:23 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:52.533 16:23:23 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:52.533 16:23:23 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:52.533 16:23:23 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:52.533 16:23:23 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:52.533 16:23:23 -- setup/driver.sh@32 -- # return 1 00:05:52.533 16:23:23 -- setup/driver.sh@38 -- # uio 00:05:52.533 16:23:23 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:52.533 16:23:23 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:52.533 16:23:23 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:52.533 16:23:23 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:52.533 16:23:23 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:05:52.533 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:52.533 16:23:23 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:52.533 16:23:23 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:52.533 16:23:23 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:52.533 Looking for driver=uio_pci_generic 00:05:52.533 16:23:23 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:52.533 16:23:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:52.533 16:23:23 -- setup/driver.sh@45 -- # setup output config 00:05:52.533 16:23:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.533 16:23:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.100 16:23:24 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:53.100 16:23:24 -- setup/driver.sh@58 -- # continue 00:05:53.100 16:23:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:53.100 16:23:24 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:53.100 16:23:24 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:53.100 16:23:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:54.035 16:23:25 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:54.035 16:23:25 -- setup/driver.sh@65 -- # setup reset
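The guess_driver trace above reduces to the following decision. A hedged sketch reconstructed from the xtrace, not the verbatim setup/driver.sh (the vfio-pci/uio_pci_generic echo values mirror the trace; everything else is inferred):

    shopt -s nullglob   # the trace shows the empty iommu_groups glob expanding to 0 elements
    pick_driver() {
        local unsafe_vfio=N
        # vfio is only usable with an IOMMU, or with explicit unsafe no-IOMMU mode.
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
        # otherwise fall back to uio_pci_generic if modprobe can resolve its .ko chain
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }
    # In this run: no IOMMU groups and unsafe_vfio=N, so uio_pci_generic is picked.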
00:05:54.035 16:23:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:54.035 16:23:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:54.603 00:05:54.603 real 0m2.143s 00:05:54.603 user 0m0.451s 00:05:54.603 sys 0m1.696s 00:05:54.603 16:23:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.603 16:23:25 -- common/autotest_common.sh@10 -- # set +x 00:05:54.603 ************************************ 00:05:54.603 END TEST guess_driver 00:05:54.603 ************************************ 00:05:54.603 00:05:54.603 real 0m2.909s 00:05:54.603 user 0m0.762s 00:05:54.603 sys 0m2.159s 00:05:54.603 16:23:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.603 ************************************ 00:05:54.603 END TEST driver 00:05:54.603 ************************************ 00:05:54.603 16:23:26 -- common/autotest_common.sh@10 -- # set +x 00:05:54.862 16:23:26 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:54.862 16:23:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.862 16:23:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.862 16:23:26 -- common/autotest_common.sh@10 -- # set +x 00:05:54.862 ************************************ 00:05:54.862 START TEST devices 00:05:54.862 ************************************ 00:05:54.862 16:23:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:54.862 * Looking for test storage... 00:05:54.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:54.862 16:23:26 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:54.862 16:23:26 -- setup/devices.sh@192 -- # setup reset 00:05:54.862 16:23:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:54.862 16:23:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:55.429 16:23:26 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:55.429 16:23:26 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:55.429 16:23:26 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:55.429 16:23:26 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:55.429 16:23:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:55.429 16:23:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:55.429 16:23:26 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:55.429 16:23:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:55.429 16:23:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:55.429 16:23:26 -- setup/devices.sh@196 -- # blocks=() 00:05:55.429 16:23:26 -- setup/devices.sh@196 -- # declare -a blocks 00:05:55.429 16:23:26 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:55.429 16:23:26 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:55.429 16:23:26 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:55.429 16:23:26 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:55.429 16:23:26 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:55.429 16:23:26 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:55.429 16:23:26 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:55.429 16:23:26 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:55.429 16:23:26 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:55.429 16:23:26 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:55.429 16:23:26 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:55.429 No valid GPT data, bailing 00:05:55.429 16:23:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:55.429 16:23:26 -- scripts/common.sh@393 -- # pt= 00:05:55.429 16:23:26 -- scripts/common.sh@394 -- # return 1 00:05:55.429 16:23:26 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:55.429 16:23:26 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:55.429 16:23:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:55.429 16:23:26 -- setup/common.sh@80 -- # echo 5368709120 00:05:55.429 16:23:26 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:55.429 16:23:26 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:55.429 16:23:26 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:55.429 16:23:26 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:55.429 16:23:26 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:55.429 16:23:26 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:55.429 16:23:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.429 16:23:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.429 16:23:26 -- common/autotest_common.sh@10 -- # set +x 00:05:55.429 ************************************ 00:05:55.429 START TEST nvme_mount 00:05:55.430 ************************************ 00:05:55.430 16:23:26 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:55.430 16:23:26 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:55.430 16:23:26 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:55.430 16:23:26 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:55.430 16:23:26 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:55.430 16:23:26 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:55.430 16:23:26 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:55.430 16:23:26 -- setup/common.sh@40 -- # local part_no=1 00:05:55.430 16:23:26 -- setup/common.sh@41 -- # local size=1073741824 00:05:55.430 16:23:26 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:55.430 16:23:26 -- setup/common.sh@44 -- # parts=() 00:05:55.430 16:23:26 -- setup/common.sh@44 -- # local parts 00:05:55.430 16:23:26 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:55.430 16:23:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:55.430 16:23:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:55.430 16:23:26 -- setup/common.sh@46 -- # (( part++ )) 00:05:55.430 16:23:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:55.430 16:23:26 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:55.430 16:23:26 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:55.430 16:23:26 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:56.806 Creating new GPT entries in memory. 00:05:56.806 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:56.806 other utilities. 00:05:56.806 16:23:27 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:56.806 16:23:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:56.806 16:23:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:56.806 16:23:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:56.806 16:23:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:57.742 Creating new GPT entries in memory. 00:05:57.742 The operation has completed successfully. 00:05:57.742 16:23:28 -- setup/common.sh@57 -- # (( part++ )) 00:05:57.742 16:23:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:57.742 16:23:28 -- setup/common.sh@62 -- # wait 108323 00:05:57.742 16:23:28 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.742 16:23:28 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:57.742 16:23:28 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.742 16:23:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:57.742 16:23:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:57.742 16:23:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.742 16:23:28 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:57.742 16:23:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:57.742 16:23:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:57.742 16:23:28 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:57.742 16:23:28 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:57.742 16:23:28 -- setup/devices.sh@53 -- # local found=0 00:05:57.742 16:23:28 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:57.742 16:23:28 -- setup/devices.sh@56 -- # : 00:05:57.742 16:23:28 -- setup/devices.sh@59 -- # local pci status 00:05:57.742 16:23:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.742 16:23:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:57.742 16:23:28 -- setup/devices.sh@47 -- # setup output config 00:05:57.742 16:23:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.742 16:23:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:58.021 16:23:29 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:58.021 16:23:29 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:58.021 16:23:29 -- setup/devices.sh@63 -- # found=1 00:05:58.021 16:23:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.021 16:23:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:58.021 16:23:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.021 16:23:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:58.021 16:23:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.962 16:23:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:58.962 16:23:30 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:58.962 16:23:30 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.962 16:23:30 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:58.962 16:23:30 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:58.962 16:23:30 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:58.962 16:23:30 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.962 16:23:30 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.962 16:23:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.962 16:23:30 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:58.962 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:58.962 16:23:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.962 16:23:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:58.962 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:58.962 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:58.962 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:58.962 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:58.962 16:23:30 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:58.962 16:23:30 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:58.962 16:23:30 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.962 16:23:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:58.962 16:23:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:58.962 16:23:30 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.962 16:23:30 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:58.962 16:23:30 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:58.962 16:23:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:58.962 16:23:30 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.962 16:23:30 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:58.962 16:23:30 -- setup/devices.sh@53 -- # local found=0 00:05:58.962 16:23:30 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:58.962 16:23:30 -- setup/devices.sh@56 -- # : 00:05:58.962 16:23:30 -- setup/devices.sh@59 -- # local pci status 00:05:58.962 16:23:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.962 16:23:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:58.962 16:23:30 -- setup/devices.sh@47 -- # setup output config 00:05:58.962 16:23:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.963 16:23:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:59.222 16:23:30 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:59.222 16:23:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:59.222 16:23:30 -- setup/devices.sh@63 -- # found=1 00:05:59.222 16:23:30 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:05:59.222 16:23:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:59.222 16:23:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.482 16:23:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:59.482 16:23:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.417 16:23:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:00.417 16:23:31 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:00.417 16:23:31 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:00.417 16:23:31 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:00.417 16:23:31 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:00.417 16:23:31 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:00.417 16:23:31 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:06:00.417 16:23:31 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:00.417 16:23:31 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:00.417 16:23:31 -- setup/devices.sh@50 -- # local mount_point= 00:06:00.417 16:23:31 -- setup/devices.sh@51 -- # local test_file= 00:06:00.417 16:23:31 -- setup/devices.sh@53 -- # local found=0 00:06:00.417 16:23:31 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:00.417 16:23:31 -- setup/devices.sh@59 -- # local pci status 00:06:00.417 16:23:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.417 16:23:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:00.417 16:23:31 -- setup/devices.sh@47 -- # setup output config 00:06:00.417 16:23:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:00.417 16:23:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:00.676 16:23:32 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:00.676 16:23:32 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:00.676 16:23:32 -- setup/devices.sh@63 -- # found=1 00:06:00.676 16:23:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.676 16:23:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:00.676 16:23:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.936 16:23:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:00.936 16:23:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:01.873 16:23:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:01.873 16:23:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:01.873 16:23:33 -- setup/devices.sh@68 -- # return 0 00:06:01.873 16:23:33 -- setup/devices.sh@128 -- # cleanup_nvme 00:06:01.873 16:23:33 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:01.873 16:23:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:01.873 16:23:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:01.873 16:23:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:01.873 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:01.873 00:06:01.873 real 0m6.328s 00:06:01.873 user 0m0.735s 00:06:01.873 sys 0m3.602s 00:06:01.873 16:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.873 
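The nvme_mount test finishing here follows a simple format/mount/verify/tear-down cycle. A sketch of the shell steps implied by the trace (paths and the ext4 choice come from the log; the actual SPDK helpers wrap these calls):

    nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mkfs.ext4 -qF /dev/nvme0n1p1        # format the freshly created partition
    mkdir -p "$nvme_mount"
    mount /dev/nvme0n1p1 "$nvme_mount"  # mount it where the test expects
    touch "$nvme_mount/test_nvme"       # marker file the verify step checks for
    # ... verification: mountpoint -q "$nvme_mount" and a -e test on test_nvme ...
    umount "$nvme_mount"
    wipefs --all /dev/nvme0n1p1         # erase the ext4 magic (the "53 ef" above)
    wipefs --all /dev/nvme0n1           # erase GPT headers and the protective MBR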
************************************ 00:06:01.873 END TEST nvme_mount 00:06:01.873 ************************************ 00:06:01.873 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:06:01.873 16:23:33 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:01.873 16:23:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.873 16:23:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.873 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:06:01.873 ************************************ 00:06:01.873 START TEST dm_mount 00:06:01.873 ************************************ 00:06:01.873 16:23:33 -- common/autotest_common.sh@1104 -- # dm_mount 00:06:01.873 16:23:33 -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:01.873 16:23:33 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:01.873 16:23:33 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:01.873 16:23:33 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:01.873 16:23:33 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:01.873 16:23:33 -- setup/common.sh@40 -- # local part_no=2 00:06:01.873 16:23:33 -- setup/common.sh@41 -- # local size=1073741824 00:06:01.873 16:23:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:01.873 16:23:33 -- setup/common.sh@44 -- # parts=() 00:06:01.873 16:23:33 -- setup/common.sh@44 -- # local parts 00:06:01.873 16:23:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:01.873 16:23:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:01.873 16:23:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:01.873 16:23:33 -- setup/common.sh@46 -- # (( part++ )) 00:06:01.873 16:23:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:01.873 16:23:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:01.873 16:23:33 -- setup/common.sh@46 -- # (( part++ )) 00:06:01.873 16:23:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:01.873 16:23:33 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:01.873 16:23:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:01.873 16:23:33 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:02.809 Creating new GPT entries in memory. 00:06:02.809 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:02.809 other utilities. 00:06:02.809 16:23:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:02.809 16:23:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:02.809 16:23:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:02.809 16:23:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:02.809 16:23:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:04.182 Creating new GPT entries in memory. 00:06:04.182 The operation has completed successfully. 00:06:04.182 16:23:35 -- setup/common.sh@57 -- # (( part++ )) 00:06:04.182 16:23:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:04.182 16:23:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:04.182 16:23:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:04.182 16:23:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:05.116 The operation has completed successfully. 
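Both nvme_mount and the dm_mount test starting above create their partitions the same way. A sketch of the partition step as traced (sector numbers are taken from the log: the 1 GiB size constant divided by 4096 gives 262144 sectors, i.e. 128 MiB per partition at 512-byte sectors):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                            # wipe any existing GPT and MBR
    # create partitions under flock so concurrent sgdisk invocations don't race
    flock "$disk" sgdisk "$disk" --new=1:2048:264191    # p1: sectors 2048..264191
    flock "$disk" sgdisk "$disk" --new=2:264192:526335  # p2: the next 262144 sectors
    # scripts/sync_dev_uevents.sh waits for the kernel "add" uevents for
    # nvme0n1p1/nvme0n1p2 before the test touches the new block devices.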
00:06:05.116 16:23:36 -- setup/common.sh@57 -- # (( part++ )) 00:06:05.116 16:23:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:05.116 16:23:36 -- setup/common.sh@62 -- # wait 108809 00:06:05.116 16:23:36 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:05.116 16:23:36 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:05.116 16:23:36 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:05.116 16:23:36 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:05.116 16:23:36 -- setup/devices.sh@160 -- # for t in {1..5} 00:06:05.116 16:23:36 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:05.116 16:23:36 -- setup/devices.sh@161 -- # break 00:06:05.116 16:23:36 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:05.116 16:23:36 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:05.116 16:23:36 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:05.116 16:23:36 -- setup/devices.sh@166 -- # dm=dm-0 00:06:05.116 16:23:36 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:05.116 16:23:36 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:05.116 16:23:36 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:05.116 16:23:36 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:05.116 16:23:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:05.116 16:23:36 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:05.116 16:23:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:05.116 16:23:36 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:05.116 16:23:36 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:05.116 16:23:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:05.116 16:23:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:05.116 16:23:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:05.116 16:23:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:05.116 16:23:36 -- setup/devices.sh@53 -- # local found=0 00:06:05.116 16:23:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:05.116 16:23:36 -- setup/devices.sh@56 -- # : 00:06:05.116 16:23:36 -- setup/devices.sh@59 -- # local pci status 00:06:05.116 16:23:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.116 16:23:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:05.116 16:23:36 -- setup/devices.sh@47 -- # setup output config 00:06:05.116 16:23:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.116 16:23:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:05.373 16:23:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:05.374 16:23:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:05.374 16:23:36 -- setup/devices.sh@63 -- # found=1 00:06:05.374 16:23:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.374 16:23:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:05.374 16:23:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.631 16:23:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:05.631 16:23:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.565 16:23:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:06.565 16:23:37 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:06.565 16:23:37 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:06.565 16:23:37 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:06.565 16:23:37 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:06.565 16:23:37 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:06.565 16:23:37 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:06.565 16:23:37 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:06:06.565 16:23:37 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:06.565 16:23:37 -- setup/devices.sh@50 -- # local mount_point= 00:06:06.565 16:23:37 -- setup/devices.sh@51 -- # local test_file= 00:06:06.565 16:23:37 -- setup/devices.sh@53 -- # local found=0 00:06:06.565 16:23:37 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:06.565 16:23:37 -- setup/devices.sh@59 -- # local pci status 00:06:06.565 16:23:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.565 16:23:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:06:06.565 16:23:37 -- setup/devices.sh@47 -- # setup output config 00:06:06.565 16:23:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.565 16:23:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:06.831 16:23:38 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:06.831 16:23:38 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:06.831 16:23:38 -- setup/devices.sh@63 -- # found=1 00:06:06.831 16:23:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.831 16:23:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:06.831 16:23:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.831 16:23:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:06:06.831 16:23:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.783 16:23:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:07.783 16:23:39 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:07.783 16:23:39 -- setup/devices.sh@68 -- # return 0 00:06:07.783 16:23:39 -- setup/devices.sh@187 -- # cleanup_dm 00:06:07.783 16:23:39 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:07.783 16:23:39 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:07.783 16:23:39 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:08.042 16:23:39 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:08.042 16:23:39 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:08.042 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:08.042 16:23:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:08.042 16:23:39 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:08.042 00:06:08.042 real 0m6.103s 00:06:08.042 user 0m0.478s 00:06:08.042 sys 0m2.439s 00:06:08.042 16:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.042 ************************************ 00:06:08.042 END TEST dm_mount 00:06:08.042 16:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.042 ************************************ 00:06:08.042 16:23:39 -- setup/devices.sh@1 -- # cleanup 00:06:08.042 16:23:39 -- setup/devices.sh@11 -- # cleanup_nvme 00:06:08.042 16:23:39 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:08.042 16:23:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:08.042 16:23:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:08.042 16:23:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:08.042 16:23:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:08.042 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:08.042 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:08.042 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:08.042 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:08.042 16:23:39 -- setup/devices.sh@12 -- # cleanup_dm 00:06:08.043 16:23:39 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:08.043 16:23:39 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:08.043 16:23:39 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:08.043 16:23:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:08.043 16:23:39 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:08.043 16:23:39 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:08.043 00:06:08.043 real 0m13.388s 00:06:08.043 user 0m1.664s 00:06:08.043 sys 0m6.547s 00:06:08.043 16:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.043 16:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.043 ************************************ 00:06:08.043 END TEST devices 00:06:08.043 ************************************ 00:06:08.302 00:06:08.302 real 0m31.003s 00:06:08.302 user 0m6.526s 00:06:08.302 sys 0m19.637s 00:06:08.302 16:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.302 16:23:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.302 ************************************ 00:06:08.302 END TEST setup.sh 00:06:08.302 ************************************ 00:06:08.302 16:23:39 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:08.302 Hugepages 00:06:08.302 node hugesize free / total 00:06:08.302 node0 1048576kB 0 / 0 00:06:08.302 node0 2048kB 2048 / 2048 00:06:08.302 00:06:08.302 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:08.560 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:08.561 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:08.561 16:23:39 -- spdk/autotest.sh@141 -- # uname -s 00:06:08.561 16:23:39 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:06:08.561 16:23:39 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:06:08.561 16:23:39 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:09.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:09.128 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:10.505 16:23:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:11.442 16:23:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:11.442 16:23:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:11.442 16:23:42 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:06:11.442 16:23:42 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:06:11.442 16:23:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:11.442 16:23:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:11.442 16:23:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:11.442 16:23:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:11.442 16:23:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:11.442 16:23:42 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:11.442 16:23:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:06:11.442 16:23:42 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:11.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:11.700 Waiting for block devices as requested 00:06:11.700 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:06:11.958 16:23:43 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:06:11.958 16:23:43 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:06:11.958 16:23:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:11.958 16:23:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:06:11.958 16:23:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:11.958 16:23:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:06:11.958 16:23:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:11.958 16:23:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:11.958 16:23:43 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:06:11.958 16:23:43 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:06:11.958 16:23:43 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:06:11.958 16:23:43 -- common/autotest_common.sh@1530 -- # grep oacs 00:06:11.958 16:23:43 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:06:11.958 16:23:43 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:06:11.958 16:23:43 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:06:11.958 16:23:43 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:06:11.958 16:23:43 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:06:11.958 16:23:43 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:06:11.958 16:23:43 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:06:11.958 16:23:43 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:06:11.958 16:23:43 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:06:11.958 16:23:43 -- common/autotest_common.sh@1542 -- # continue 00:06:11.958 16:23:43 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:06:11.958 16:23:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:11.958 16:23:43 -- common/autotest_common.sh@10 -- # set +x 00:06:11.958 16:23:43 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:06:11.958 16:23:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:11.958 16:23:43 -- common/autotest_common.sh@10 -- # set +x 00:06:11.958 16:23:43 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:12.523 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:12.523 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:13.458 16:23:44 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:06:13.458 16:23:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:13.458 16:23:44 -- common/autotest_common.sh@10 -- # set +x 00:06:13.458 16:23:44 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:06:13.458 16:23:44 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:13.458 16:23:44 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:13.458 16:23:44 -- common/autotest_common.sh@1562 -- # bdfs=() 00:06:13.458 16:23:44 -- common/autotest_common.sh@1562 -- # local bdfs 00:06:13.458 16:23:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:13.458 16:23:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:13.458 16:23:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:13.458 16:23:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:13.458 16:23:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:13.458 16:23:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:13.717 16:23:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:13.717 16:23:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:06:13.717 16:23:44 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:06:13.717 16:23:44 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:06:13.717 16:23:44 -- common/autotest_common.sh@1565 -- # device=0x0010 00:06:13.717 16:23:44 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:13.717 16:23:44 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:06:13.717 16:23:44 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:13.717 16:23:44 -- common/autotest_common.sh@1578 -- # return 0 00:06:13.717 16:23:44 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:06:13.717 16:23:44 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:13.717 16:23:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.717 16:23:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.717 16:23:44 -- common/autotest_common.sh@10 -- # set +x 00:06:13.717 ************************************ 00:06:13.717 START TEST unittest 00:06:13.717 ************************************ 00:06:13.717 16:23:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:13.717 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:13.717 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:06:13.717 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:06:13.717 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:13.717 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:06:13.717 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:13.717 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:13.717 ++ rpc_py=rpc_cmd 00:06:13.717 ++ set -e 00:06:13.717 ++ shopt -s nullglob 00:06:13.717 ++ shopt -s extglob 00:06:13.717 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:13.717 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:13.717 +++ CONFIG_WPDK_DIR= 00:06:13.717 +++ CONFIG_ASAN=y 00:06:13.717 +++ CONFIG_VBDEV_COMPRESS=n 00:06:13.717 +++ CONFIG_HAVE_EXECINFO_H=y 00:06:13.717 +++ CONFIG_USDT=n 00:06:13.717 +++ CONFIG_CUSTOMOCF=n 00:06:13.717 +++ CONFIG_PREFIX=/usr/local 00:06:13.717 +++ CONFIG_RBD=n 00:06:13.717 +++ CONFIG_LIBDIR= 00:06:13.717 +++ CONFIG_IDXD=y 00:06:13.717 +++ CONFIG_NVME_CUSE=y 00:06:13.717 +++ CONFIG_SMA=n 00:06:13.717 +++ CONFIG_VTUNE=n 00:06:13.717 +++ CONFIG_TSAN=n 00:06:13.717 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:13.717 +++ CONFIG_VFIO_USER_DIR= 00:06:13.717 +++ CONFIG_PGO_CAPTURE=n 00:06:13.717 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:13.717 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:13.717 +++ CONFIG_LTO=n 00:06:13.717 +++ CONFIG_ISCSI_INITIATOR=y 00:06:13.717 +++ CONFIG_CET=n 00:06:13.717 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:13.717 +++ CONFIG_OCF_PATH= 00:06:13.717 +++ CONFIG_RDMA_SET_TOS=y 00:06:13.717 +++ CONFIG_HAVE_ARC4RANDOM=n 00:06:13.717 +++ CONFIG_HAVE_LIBARCHIVE=n 00:06:13.717 +++ CONFIG_UBLK=n 00:06:13.717 +++ CONFIG_ISAL_CRYPTO=y 00:06:13.717 +++ CONFIG_OPENSSL_PATH= 00:06:13.717 +++ CONFIG_OCF=n 00:06:13.717 +++ CONFIG_FUSE=n 00:06:13.717 +++ CONFIG_VTUNE_DIR= 00:06:13.717 +++ CONFIG_FUZZER_LIB= 00:06:13.717 +++ CONFIG_FUZZER=n 00:06:13.717 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:06:13.717 +++ CONFIG_CRYPTO=n 00:06:13.717 +++ CONFIG_PGO_USE=n 00:06:13.717 +++ CONFIG_VHOST=y 00:06:13.717 +++ CONFIG_DAOS=n 00:06:13.717 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:06:13.717 +++ CONFIG_DAOS_DIR= 00:06:13.717 +++ CONFIG_UNIT_TESTS=y 00:06:13.717 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:13.717 +++ CONFIG_VIRTIO=y 00:06:13.717 +++ CONFIG_COVERAGE=y 00:06:13.717 +++ CONFIG_RDMA=y 00:06:13.717 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:13.717 +++ CONFIG_URING_PATH= 00:06:13.717 +++ CONFIG_XNVME=n 00:06:13.717 +++ CONFIG_VFIO_USER=n 00:06:13.717 +++ CONFIG_ARCH=native 00:06:13.717 +++ CONFIG_URING_ZNS=n 00:06:13.717 +++ CONFIG_WERROR=y 00:06:13.717 +++ CONFIG_HAVE_LIBBSD=n 00:06:13.717 +++ CONFIG_UBSAN=y 00:06:13.717 +++ CONFIG_IPSEC_MB_DIR= 00:06:13.717 +++ CONFIG_GOLANG=n 00:06:13.717 +++ CONFIG_ISAL=y 00:06:13.717 +++ CONFIG_IDXD_KERNEL=n 00:06:13.717 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:13.717 +++ CONFIG_RDMA_PROV=verbs 00:06:13.717 +++ CONFIG_APPS=y 00:06:13.717 +++ CONFIG_SHARED=n 00:06:13.717 +++ CONFIG_FC_PATH= 00:06:13.717 +++ CONFIG_DPDK_PKG_CONFIG=n 00:06:13.717 +++ CONFIG_FC=n 00:06:13.717 +++ CONFIG_AVAHI=n 00:06:13.717 +++ CONFIG_FIO_PLUGIN=y 00:06:13.717 +++ CONFIG_RAID5F=y 00:06:13.717 +++ CONFIG_EXAMPLES=y 00:06:13.717 +++ CONFIG_TESTS=y 00:06:13.717 +++ CONFIG_CRYPTO_MLX5=n 00:06:13.717 +++ CONFIG_MAX_LCORES= 00:06:13.717 +++ CONFIG_IPSEC_MB=n 00:06:13.717 +++ CONFIG_DEBUG=y 00:06:13.717 +++ CONFIG_DPDK_COMPRESSDEV=n 00:06:13.717 +++ CONFIG_CROSS_PREFIX= 00:06:13.717 +++ CONFIG_URING=n 00:06:13.717 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:13.717 +++++ dirname 
/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:13.717 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:13.717 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:13.717 +++ _root=/home/vagrant/spdk_repo/spdk 00:06:13.717 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:13.717 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:13.717 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:13.717 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:13.717 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:13.717 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:13.717 +++ VHOST_APP=("$_app_dir/vhost") 00:06:13.717 +++ DD_APP=("$_app_dir/spdk_dd") 00:06:13.717 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:06:13.717 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:13.717 +++ [[ #ifndef SPDK_CONFIG_H 00:06:13.717 #define SPDK_CONFIG_H 00:06:13.717 #define SPDK_CONFIG_APPS 1 00:06:13.717 #define SPDK_CONFIG_ARCH native 00:06:13.717 #define SPDK_CONFIG_ASAN 1 00:06:13.717 #undef SPDK_CONFIG_AVAHI 00:06:13.717 #undef SPDK_CONFIG_CET 00:06:13.717 #define SPDK_CONFIG_COVERAGE 1 00:06:13.717 #define SPDK_CONFIG_CROSS_PREFIX 00:06:13.717 #undef SPDK_CONFIG_CRYPTO 00:06:13.717 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:13.717 #undef SPDK_CONFIG_CUSTOMOCF 00:06:13.717 #undef SPDK_CONFIG_DAOS 00:06:13.717 #define SPDK_CONFIG_DAOS_DIR 00:06:13.717 #define SPDK_CONFIG_DEBUG 1 00:06:13.717 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:13.717 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:06:13.717 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:06:13.717 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:06:13.717 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:13.717 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:13.717 #define SPDK_CONFIG_EXAMPLES 1 00:06:13.717 #undef SPDK_CONFIG_FC 00:06:13.717 #define SPDK_CONFIG_FC_PATH 00:06:13.717 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:13.717 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:13.717 #undef SPDK_CONFIG_FUSE 00:06:13.717 #undef SPDK_CONFIG_FUZZER 00:06:13.717 #define SPDK_CONFIG_FUZZER_LIB 00:06:13.717 #undef SPDK_CONFIG_GOLANG 00:06:13.717 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:06:13.717 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:13.717 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:13.717 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:13.717 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:13.717 #define SPDK_CONFIG_IDXD 1 00:06:13.717 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:13.717 #undef SPDK_CONFIG_IPSEC_MB 00:06:13.717 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:13.717 #define SPDK_CONFIG_ISAL 1 00:06:13.717 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:13.717 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:13.717 #define SPDK_CONFIG_LIBDIR 00:06:13.717 #undef SPDK_CONFIG_LTO 00:06:13.717 #define SPDK_CONFIG_MAX_LCORES 00:06:13.717 #define SPDK_CONFIG_NVME_CUSE 1 00:06:13.718 #undef SPDK_CONFIG_OCF 00:06:13.718 #define SPDK_CONFIG_OCF_PATH 00:06:13.718 #define SPDK_CONFIG_OPENSSL_PATH 00:06:13.718 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:13.718 #undef SPDK_CONFIG_PGO_USE 00:06:13.718 #define SPDK_CONFIG_PREFIX /usr/local 00:06:13.718 #define SPDK_CONFIG_RAID5F 1 00:06:13.718 #undef SPDK_CONFIG_RBD 00:06:13.718 #define SPDK_CONFIG_RDMA 1 00:06:13.718 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:13.718 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:13.718 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:13.718 
#define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:13.718 #undef SPDK_CONFIG_SHARED 00:06:13.718 #undef SPDK_CONFIG_SMA 00:06:13.718 #define SPDK_CONFIG_TESTS 1 00:06:13.718 #undef SPDK_CONFIG_TSAN 00:06:13.718 #undef SPDK_CONFIG_UBLK 00:06:13.718 #define SPDK_CONFIG_UBSAN 1 00:06:13.718 #define SPDK_CONFIG_UNIT_TESTS 1 00:06:13.718 #undef SPDK_CONFIG_URING 00:06:13.718 #define SPDK_CONFIG_URING_PATH 00:06:13.718 #undef SPDK_CONFIG_URING_ZNS 00:06:13.718 #undef SPDK_CONFIG_USDT 00:06:13.718 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:13.718 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:13.718 #undef SPDK_CONFIG_VFIO_USER 00:06:13.718 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:13.718 #define SPDK_CONFIG_VHOST 1 00:06:13.718 #define SPDK_CONFIG_VIRTIO 1 00:06:13.718 #undef SPDK_CONFIG_VTUNE 00:06:13.718 #define SPDK_CONFIG_VTUNE_DIR 00:06:13.718 #define SPDK_CONFIG_WERROR 1 00:06:13.718 #define SPDK_CONFIG_WPDK_DIR 00:06:13.718 #undef SPDK_CONFIG_XNVME 00:06:13.718 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:13.718 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:13.718 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.718 +++ [[ -e /bin/wpdk_common.sh ]] 00:06:13.718 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.718 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.718 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:13.718 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:13.718 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:13.718 ++++ export PATH 00:06:13.718 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:13.718 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:13.718 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:13.718 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:13.718 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:13.718 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:13.718 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:13.718 +++ TEST_TAG=N/A 00:06:13.718 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:13.718 ++ : 1 00:06:13.718 ++ export RUN_NIGHTLY 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_RUN_VALGRIND 00:06:13.718 ++ : 1 00:06:13.718 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:06:13.718 ++ : 1 00:06:13.718 ++ export 
SPDK_TEST_UNITTEST 00:06:13.718 ++ : 00:06:13.718 ++ export SPDK_TEST_AUTOBUILD 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_RELEASE_BUILD 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_ISAL 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_ISCSI 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_ISCSI_INITIATOR 00:06:13.718 ++ : 1 00:06:13.718 ++ export SPDK_TEST_NVME 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_NVME_PMR 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_NVME_BP 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_NVME_CLI 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_NVME_CUSE 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_NVME_FDP 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_NVMF 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_VFIOUSER 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_VFIOUSER_QEMU 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_FUZZER 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_FUZZER_SHORT 00:06:13.718 ++ : rdma 00:06:13.718 ++ export SPDK_TEST_NVMF_TRANSPORT 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_RBD 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_VHOST 00:06:13.718 ++ : 1 00:06:13.718 ++ export SPDK_TEST_BLOCKDEV 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_IOAT 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_BLOBFS 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_VHOST_INIT 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_LVOL 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_VBDEV_COMPRESS 00:06:13.718 ++ : 1 00:06:13.718 ++ export SPDK_RUN_ASAN 00:06:13.718 ++ : 1 00:06:13.718 ++ export SPDK_RUN_UBSAN 00:06:13.718 ++ : /home/vagrant/spdk_repo/dpdk/build 00:06:13.718 ++ export SPDK_RUN_EXTERNAL_DPDK 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_RUN_NON_ROOT 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_CRYPTO 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_FTL 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_OCF 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_VMD 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_OPAL 00:06:13.718 ++ : v22.11.4 00:06:13.718 ++ export SPDK_TEST_NATIVE_DPDK 00:06:13.718 ++ : true 00:06:13.718 ++ export SPDK_AUTOTEST_X 00:06:13.718 ++ : 1 00:06:13.718 ++ export SPDK_TEST_RAID5 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_URING 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_USDT 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_USE_IGB_UIO 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_SCHEDULER 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_SCANBUILD 00:06:13.718 ++ : 00:06:13.718 ++ export SPDK_TEST_NVMF_NICS 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_SMA 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_DAOS 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_XNVME 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_ACCEL_DSA 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_ACCEL_IAA 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_ACCEL_IOAT 00:06:13.718 ++ : 00:06:13.718 ++ export SPDK_TEST_FUZZER_TARGET 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_TEST_NVMF_MDNS 00:06:13.718 ++ : 0 00:06:13.718 ++ export SPDK_JSONRPC_GO_CLIENT 00:06:13.718 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:13.718 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:13.718 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:13.718 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:13.718 
++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:13.718 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:13.718 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:13.718 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:13.718 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:13.718 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:06:13.718 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:13.718 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:13.718 ++ export PYTHONDONTWRITEBYTECODE=1 00:06:13.718 ++ PYTHONDONTWRITEBYTECODE=1 00:06:13.718 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:13.718 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:13.718 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:13.718 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:13.718 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:06:13.718 ++ rm -rf /var/tmp/asan_suppression_file 00:06:13.718 ++ cat 00:06:13.718 ++ echo leak:libfuse3.so 00:06:13.718 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:13.718 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:13.718 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:13.718 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:13.718 ++ '[' -z /var/spdk/dependencies ']' 00:06:13.718 ++ export DEPENDENCY_DIR 00:06:13.718 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:13.718 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:13.718 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:13.718 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:13.718 ++ export QEMU_BIN= 00:06:13.718 ++ QEMU_BIN= 00:06:13.718 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:13.718 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:13.718 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:13.718 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:13.718 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:13.718 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:13.718 ++ '[' 0 -eq 0 ']' 00:06:13.718 ++ export valgrind= 00:06:13.718 ++ valgrind= 00:06:13.718 +++ uname -s 00:06:13.718 ++ '[' Linux = Linux ']' 00:06:13.718 ++ HUGEMEM=4096 
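
The exports above all come from sourcing autotest_common.sh; the sanitizer variables in particular are what make ASan/UBSan failures abort the run while suppressing a known libfuse3 leak. A standalone sketch of that setup, with the option strings copied verbatim from the trace (the suppression-file path is the same temporary location the harness uses; the real script also concatenates any pre-existing suppression candidates, which is elided here):

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"                   # start from a clean file
echo 'leak:libfuse3.so' > "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
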
00:06:13.718 ++ export CLEAR_HUGE=yes 00:06:13.718 ++ CLEAR_HUGE=yes 00:06:13.718 ++ [[ 0 -eq 1 ]] 00:06:13.718 ++ [[ 0 -eq 1 ]] 00:06:13.718 ++ MAKE=make 00:06:13.718 +++ nproc 00:06:13.718 ++ MAKEFLAGS=-j10 00:06:13.718 ++ export HUGEMEM=4096 00:06:13.718 ++ HUGEMEM=4096 00:06:13.718 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:13.718 ++ NO_HUGE=() 00:06:13.718 ++ TEST_MODE= 00:06:13.718 ++ [[ -z '' ]] 00:06:13.718 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:13.718 ++ exec 00:06:13.718 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:13.718 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:06:13.718 ++ set_test_storage 2147483648 00:06:13.718 ++ [[ -v testdir ]] 00:06:13.718 ++ local requested_size=2147483648 00:06:13.718 ++ local mount target_dir 00:06:13.718 ++ local -A mounts fss sizes avails uses 00:06:13.718 ++ local source fs size avail mount use 00:06:13.718 ++ local storage_fallback storage_candidates 00:06:13.718 +++ mktemp -udt spdk.XXXXXX 00:06:13.718 ++ storage_fallback=/tmp/spdk.LBRwAI 00:06:13.718 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:13.718 ++ [[ -n '' ]] 00:06:13.718 ++ [[ -n '' ]] 00:06:13.718 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.LBRwAI/tests/unit /tmp/spdk.LBRwAI 00:06:13.718 ++ requested_size=2214592512 00:06:13.718 ++ read -r source fs size use avail _ mount 00:06:13.718 +++ df -T 00:06:13.718 +++ grep -v Filesystem 00:06:13.718 ++ mounts["$mount"]=tmpfs 00:06:13.718 ++ fss["$mount"]=tmpfs 00:06:13.718 ++ avails["$mount"]=1252601856 00:06:13.718 ++ sizes["$mount"]=1253683200 00:06:13.718 ++ uses["$mount"]=1081344 00:06:13.718 ++ read -r source fs size use avail _ mount 00:06:13.718 ++ mounts["$mount"]=/dev/vda1 00:06:13.718 ++ fss["$mount"]=ext4 00:06:13.718 ++ avails["$mount"]=9649983488 00:06:13.718 ++ sizes["$mount"]=20616794112 00:06:13.718 ++ uses["$mount"]=10950033408 00:06:13.718 ++ read -r source fs size use avail _ mount 00:06:13.718 ++ mounts["$mount"]=tmpfs 00:06:13.718 ++ fss["$mount"]=tmpfs 00:06:13.718 ++ avails["$mount"]=6268403712 00:06:13.718 ++ sizes["$mount"]=6268403712 00:06:13.718 ++ uses["$mount"]=0 00:06:13.718 ++ read -r source fs size use avail _ mount 00:06:13.718 ++ mounts["$mount"]=tmpfs 00:06:13.718 ++ fss["$mount"]=tmpfs 00:06:13.718 ++ avails["$mount"]=5242880 00:06:13.718 ++ sizes["$mount"]=5242880 00:06:13.718 ++ uses["$mount"]=0 00:06:13.718 ++ read -r source fs size use avail _ mount 00:06:13.718 ++ mounts["$mount"]=/dev/vda15 00:06:13.718 ++ fss["$mount"]=vfat 00:06:13.718 ++ avails["$mount"]=103061504 00:06:13.718 ++ sizes["$mount"]=109395968 00:06:13.718 ++ uses["$mount"]=6334464 00:06:13.718 ++ read -r source fs size use avail _ mount 00:06:13.718 ++ mounts["$mount"]=tmpfs 00:06:13.718 ++ fss["$mount"]=tmpfs 00:06:13.718 ++ avails["$mount"]=1253675008 00:06:13.718 ++ sizes["$mount"]=1253679104 00:06:13.718 ++ uses["$mount"]=4096 00:06:13.718 ++ read -r source fs size use avail _ mount 00:06:13.718 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:06:13.718 ++ fss["$mount"]=fuse.sshfs 00:06:13.718 ++ avails["$mount"]=95054999552 00:06:13.718 ++ sizes["$mount"]=105088212992 00:06:13.718 ++ uses["$mount"]=4647780352 00:06:13.718 ++ read -r source fs size use avail _ mount 00:06:13.718 ++ printf '* Looking for 
test storage...\n' 00:06:13.718 * Looking for test storage... 00:06:13.718 ++ local target_space new_size 00:06:13.718 ++ for target_dir in "${storage_candidates[@]}" 00:06:13.718 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:06:13.718 +++ awk '$1 !~ /Filesystem/{print $6}' 00:06:13.718 ++ mount=/ 00:06:13.718 ++ target_space=9649983488 00:06:13.718 ++ (( target_space == 0 || target_space < requested_size )) 00:06:13.718 ++ (( target_space >= requested_size )) 00:06:13.718 ++ [[ ext4 == tmpfs ]] 00:06:13.718 ++ [[ ext4 == ramfs ]] 00:06:13.718 ++ [[ / == / ]] 00:06:13.718 ++ new_size=13164625920 00:06:13.718 ++ (( new_size * 100 / sizes[/] > 95 )) 00:06:13.718 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:13.718 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:13.718 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:06:13.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:06:13.718 ++ return 0 00:06:13.718 ++ set -o errtrace 00:06:13.718 ++ shopt -s extdebug 00:06:13.718 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:06:13.718 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:13.718 16:23:45 -- common/autotest_common.sh@1672 -- # true 00:06:13.718 16:23:45 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:13.718 16:23:45 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:06:13.718 16:23:45 -- common/autotest_common.sh@29 -- # exec 00:06:13.718 16:23:45 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:13.718 16:23:45 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:13.718 16:23:45 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:13.718 16:23:45 -- common/autotest_common.sh@18 -- # set -x 00:06:13.718 16:23:45 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:06:13.718 16:23:45 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:06:13.718 16:23:45 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:06:13.718 16:23:45 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:06:13.718 16:23:45 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:06:13.718 16:23:45 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:06:13.718 16:23:45 -- unit/unittest.sh@179 -- # hash lcov 00:06:13.718 16:23:45 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.718 16:23:45 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:13.718 16:23:45 -- unit/unittest.sh@180 -- # cov_avail=yes 00:06:13.718 16:23:45 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:06:13.718 16:23:45 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:06:13.718 16:23:45 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:13.718 16:23:45 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:13.718 16:23:45 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:06:13.718 --rc lcov_branch_coverage=1 00:06:13.718 --rc lcov_function_coverage=1 00:06:13.718 --rc genhtml_branch_coverage=1 00:06:13.718 --rc genhtml_function_coverage=1 00:06:13.718 --rc genhtml_legend=1 00:06:13.718 --rc geninfo_all_blocks=1 00:06:13.718 ' 00:06:13.718 16:23:45 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:06:13.718 --rc lcov_branch_coverage=1 00:06:13.718 --rc lcov_function_coverage=1 00:06:13.718 --rc genhtml_branch_coverage=1 00:06:13.718 --rc 
genhtml_function_coverage=1 00:06:13.718 --rc genhtml_legend=1 00:06:13.718 --rc geninfo_all_blocks=1 00:06:13.718 ' 00:06:13.718 16:23:45 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:06:13.718 --rc lcov_branch_coverage=1 00:06:13.718 --rc lcov_function_coverage=1 00:06:13.718 --rc genhtml_branch_coverage=1 00:06:13.718 --rc genhtml_function_coverage=1 00:06:13.718 --rc genhtml_legend=1 00:06:13.718 --rc geninfo_all_blocks=1 00:06:13.718 --no-external' 00:06:13.718 16:23:45 -- unit/unittest.sh@200 -- # LCOV='lcov 00:06:13.718 --rc lcov_branch_coverage=1 00:06:13.718 --rc lcov_function_coverage=1 00:06:13.718 --rc genhtml_branch_coverage=1 00:06:13.718 --rc genhtml_function_coverage=1 00:06:13.718 --rc genhtml_legend=1 00:06:13.718 --rc geninfo_all_blocks=1 00:06:13.718 --no-external' 00:06:13.718 16:23:45 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:31.813 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:31.813 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:31.813 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:31.813 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:31.813 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:31.813 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:53.746 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:53.746 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:54.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:54.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:54.006 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:54.006 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:54.006 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:54.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:54.269 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions 
found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:54.270 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:54.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:54.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:54.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:54.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:54.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:54.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:54.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:54.537 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:54.537 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:56.441 16:24:27 -- unit/unittest.sh@206 -- # uname -m 00:06:56.441 16:24:27 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:06:56.441 16:24:27 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:56.441 16:24:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:56.441 16:24:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.441 16:24:27 -- common/autotest_common.sh@10 -- # set +x 00:06:56.441 ************************************ 00:06:56.441 START TEST unittest_pci_event 00:06:56.441 ************************************ 00:06:56.441 16:24:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:56.441 00:06:56.441 00:06:56.441 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.441 http://cunit.sourceforge.net/ 00:06:56.441 00:06:56.441 
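
The long run of geninfo warnings above is emitted while lcov captures the pre-test baseline: the -i (initial) capture visits every .gcno file, and objects that contain no functions, such as the cpp_headers header-inclusion checks, legitimately produce no data, so the warnings are expected noise rather than failures. A standalone reconstruction of the capture command shown in the trace, using the same LCOV_OPTS and the UT_COVERAGE directory exported above:

LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
  --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
  --rc genhtml_legend=1 --rc geninfo_all_blocks=1'
UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
mkdir -p "$UT_COVERAGE"
lcov $LCOV_OPTS --no-external -q -c -i -d . -t Baseline \
    -o "$UT_COVERAGE/ut_cov_base.info"
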
00:06:56.441 Suite: pci_event 00:06:56.441 Test: test_pci_parse_event ...[2024-07-13 16:24:27.858801] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:56.441 [2024-07-13 16:24:27.859523] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:56.441 passed 00:06:56.441 00:06:56.441 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.441 suites 1 1 n/a 0 0 00:06:56.441 tests 1 1 1 0 0 00:06:56.441 asserts 15 15 15 0 n/a 00:06:56.441 00:06:56.441 Elapsed time = 0.001 seconds 00:06:56.441 00:06:56.441 real 0m0.045s 00:06:56.441 user 0m0.022s 00:06:56.441 sys 0m0.018s 00:06:56.441 16:24:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.441 16:24:27 -- common/autotest_common.sh@10 -- # set +x 00:06:56.441 ************************************ 00:06:56.441 END TEST unittest_pci_event 00:06:56.441 ************************************ 00:06:56.699 16:24:27 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:56.699 16:24:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:56.699 16:24:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.699 16:24:27 -- common/autotest_common.sh@10 -- # set +x 00:06:56.699 ************************************ 00:06:56.699 START TEST unittest_include 00:06:56.699 ************************************ 00:06:56.699 16:24:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:56.699 00:06:56.699 00:06:56.699 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.699 http://cunit.sourceforge.net/ 00:06:56.699 00:06:56.699 00:06:56.699 Suite: histogram 00:06:56.699 Test: histogram_test ...passed 00:06:56.699 Test: histogram_merge ...passed 00:06:56.699 00:06:56.699 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.699 suites 1 1 n/a 0 0 00:06:56.699 tests 2 2 2 0 0 00:06:56.699 asserts 50 50 50 0 n/a 00:06:56.699 00:06:56.699 Elapsed time = 0.006 seconds 00:06:56.699 00:06:56.699 real 0m0.038s 00:06:56.699 user 0m0.018s 00:06:56.699 sys 0m0.020s 00:06:56.699 16:24:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.700 16:24:27 -- common/autotest_common.sh@10 -- # set +x 00:06:56.700 ************************************ 00:06:56.700 END TEST unittest_include 00:06:56.700 ************************************ 00:06:56.700 16:24:28 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:06:56.700 16:24:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:56.700 16:24:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.700 16:24:28 -- common/autotest_common.sh@10 -- # set +x 00:06:56.700 ************************************ 00:06:56.700 START TEST unittest_bdev 00:06:56.700 ************************************ 00:06:56.700 16:24:28 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:06:56.700 16:24:28 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:56.700 00:06:56.700 00:06:56.700 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.700 http://cunit.sourceforge.net/ 00:06:56.700 00:06:56.700 00:06:56.700 Suite: bdev 00:06:56.700 Test: bytes_to_blocks_test ...passed 00:06:56.700 Test: num_blocks_test ...passed 00:06:56.700 Test: io_valid_test ...passed 00:06:56.958 
Test: open_write_test ...[2024-07-13 16:24:28.171052] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:56.958 [2024-07-13 16:24:28.171402] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:56.958 [2024-07-13 16:24:28.171545] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:56.958 passed 00:06:56.958 Test: claim_test ...passed 00:06:56.958 Test: alias_add_del_test ...[2024-07-13 16:24:28.298176] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:56.958 [2024-07-13 16:24:28.298303] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:56.958 [2024-07-13 16:24:28.298367] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:56.958 passed 00:06:56.958 Test: get_device_stat_test ...passed 00:06:56.958 Test: bdev_io_types_test ...passed 00:06:57.217 Test: bdev_io_wait_test ...passed 00:06:57.217 Test: bdev_io_spans_split_test ...passed 00:06:57.217 Test: bdev_io_boundary_split_test ...passed 00:06:57.217 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-13 16:24:28.512246] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:57.217 passed 00:06:57.217 Test: bdev_io_mix_split_test ...passed 00:06:57.217 Test: bdev_io_split_with_io_wait ...passed 00:06:57.217 Test: bdev_io_write_unit_split_test ...[2024-07-13 16:24:28.642930] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:57.217 [2024-07-13 16:24:28.643015] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:57.217 [2024-07-13 16:24:28.643044] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:57.217 [2024-07-13 16:24:28.643083] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:57.217 passed 00:06:57.475 Test: bdev_io_alignment_with_boundary ...passed 00:06:57.475 Test: bdev_io_alignment ...passed 00:06:57.475 Test: bdev_histograms ...passed 00:06:57.475 Test: bdev_write_zeroes ...passed 00:06:57.475 Test: bdev_compare_and_write ...passed 00:06:57.734 Test: bdev_compare ...passed 00:06:57.734 Test: bdev_compare_emulated ...passed 00:06:57.734 Test: bdev_zcopy_write ...passed 00:06:57.734 Test: bdev_zcopy_read ...passed 00:06:57.734 Test: bdev_open_while_hotremove ...passed 00:06:57.734 Test: bdev_close_while_hotremove ...passed 00:06:57.734 Test: bdev_open_ext_test ...[2024-07-13 16:24:29.137582] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:57.734 passed 00:06:57.734 Test: bdev_open_ext_unregister ...passed 00:06:57.734 Test: bdev_set_io_timeout ...[2024-07-13 16:24:29.137795] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:57.734 passed 00:06:57.994 Test: bdev_set_qd_sampling ...passed 00:06:57.994 Test: lba_range_overlap 
...passed 00:06:57.994 Test: lock_lba_range_check_ranges ...passed 00:06:57.994 Test: lock_lba_range_with_io_outstanding ...passed 00:06:57.994 Test: lock_lba_range_overlapped ...passed 00:06:57.994 Test: bdev_quiesce ...[2024-07-13 16:24:29.375748] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:06:57.994 passed 00:06:57.994 Test: bdev_io_abort ...passed 00:06:58.253 Test: bdev_unmap ...passed 00:06:58.253 Test: bdev_write_zeroes_split_test ...passed 00:06:58.253 Test: bdev_set_options_test ...passed 00:06:58.253 Test: bdev_get_memory_domains ...passed 00:06:58.253 Test: bdev_io_ext ...[2024-07-13 16:24:29.525784] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:58.253 passed 00:06:58.253 Test: bdev_io_ext_no_opts ...passed 00:06:58.253 Test: bdev_io_ext_invalid_opts ...passed 00:06:58.253 Test: bdev_io_ext_split ...passed 00:06:58.512 Test: bdev_io_ext_bounce_buffer ...passed 00:06:58.512 Test: bdev_register_uuid_alias ...[2024-07-13 16:24:29.769441] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 44ee9429-1ec6-4fc3-aa34-951d7d171d6f already exists 00:06:58.512 [2024-07-13 16:24:29.769512] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:44ee9429-1ec6-4fc3-aa34-951d7d171d6f alias for bdev bdev0 00:06:58.512 passed 00:06:58.512 Test: bdev_unregister_by_name ...[2024-07-13 16:24:29.794175] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:58.512 passed 00:06:58.512 Test: for_each_bdev_test ...[2024-07-13 16:24:29.794232] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
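
Every suite in this excerpt goes through the same run_test wrapper that printed the START TEST/END TEST banners above. A simplified, hypothetical reimplementation of that pattern (the real helper lives in test/common/autotest_common.sh and additionally records per-test timing for the final report):

run_test() {
    local name=$1; shift
    (( $# >= 1 )) || return 1         # mirrors the "'[' 2 -le 1 ']'" arg-count guard in the trace
    printf '%s\n' '************************************'
    printf 'START TEST %s\n' "$name"
    printf '%s\n' '************************************'
    time "$@"                         # run the test binary, reporting real/user/sys
    local rc=$?
    printf '%s\n' '************************************'
    printf 'END TEST %s\n' "$name"
    printf '%s\n' '************************************'
    return $rc
}

run_test unittest_bdev unittest_bdev   # as invoked above for the bdev suite
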
00:06:58.512 passed 00:06:58.512 Test: bdev_seek_test ...passed 00:06:58.512 Test: bdev_copy ...passed 00:06:58.512 Test: bdev_copy_split_test ...passed 00:06:58.512 Test: examine_locks ...passed 00:06:58.512 Test: claim_v2_rwo ...[2024-07-13 16:24:29.927365] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.927422] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.927442] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:58.512 passed 00:06:58.512 Test: claim_v2_rom ...[2024-07-13 16:24:29.927491] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.927505] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.927549] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:58.512 [2024-07-13 16:24:29.927661] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.927701] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.927717] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.927738] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:58.512 passed 00:06:58.512 Test: claim_v2_rwm ...[2024-07-13 16:24:29.927785] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:58.512 [2024-07-13 16:24:29.927814] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:58.512 [2024-07-13 16:24:29.927897] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:58.512 [2024-07-13 16:24:29.927940] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.927977] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.928000] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.928015] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.928037] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:58.512 passed 00:06:58.512 Test: claim_v2_existing_writer ...passed 00:06:58.512 Test: claim_v2_existing_v1 ...[2024-07-13 16:24:29.928067] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:58.512 [2024-07-13 16:24:29.928170] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:58.512 [2024-07-13 16:24:29.928192] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:58.512 [2024-07-13 16:24:29.928282] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.928305] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.928320] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:58.512 passed 00:06:58.512 Test: claim_v1_existing_v2 ...[2024-07-13 16:24:29.928420] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.928457] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:58.512 [2024-07-13 16:24:29.928483] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:58.512 passed 00:06:58.512 Test: examine_claimed ...passed 00:06:58.512 00:06:58.512 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.512 suites 1 1 n/a 0 0 00:06:58.512 tests 59 59 59 0 0 00:06:58.512 asserts 4599 4599 4599 0 n/a 00:06:58.512 00:06:58.512 Elapsed time = 1.855 seconds 00:06:58.512 [2024-07-13 16:24:29.928691] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:58.512 16:24:29 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:58.771 00:06:58.771 00:06:58.771 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.771 http://cunit.sourceforge.net/ 00:06:58.771 00:06:58.771 00:06:58.771 Suite: nvme 00:06:58.771 Test: test_create_ctrlr ...passed 00:06:58.771 Test: test_reset_ctrlr ...[2024-07-13 16:24:29.988673] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
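The claim_v2_* failures above exercise the v2 bdev claim API, in which a module claims a bdev through an open descriptor using one of four claim types (read_many_write_one, read_many_write_none, read_many_write_many, exclusive_write); a second conflicting claim is what triggers the "already claimed" errors. A minimal sketch of taking a read-write-once claim, assuming the spdk_bdev_module_claim_bdev_desc() signature and SPDK_BDEV_CLAIM_* names match this tree's include/spdk/bdev_module.h:

#include "spdk/bdev.h"
#include "spdk/bdev_module.h"

/* Hypothetical module handle; a real module registers via SPDK_BDEV_MODULE_REGISTER(). */
static struct spdk_bdev_module g_sketch_module = { .name = "sketch_module" };

static void
sketch_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
	/* React to SPDK_BDEV_EVENT_REMOVE etc.; left empty in this sketch. */
}

static int
sketch_claim_rwo(const char *bdev_name, struct spdk_bdev_desc **out_desc)
{
	struct spdk_bdev_desc *desc = NULL;
	int rc;

	/* A read_many_write_one claim needs a writable descriptor. */
	rc = spdk_bdev_open_ext(bdev_name, true, sketch_event_cb, NULL, &desc);
	if (rc != 0) {
		return rc;
	}

	/* Fails with the "already claimed" errors logged above if another
	 * module already holds a conflicting claim on the same bdev. */
	rc = spdk_bdev_module_claim_bdev_desc(desc, SPDK_BDEV_CLAIM_READ_MANY_WRITE_ONE,
					      NULL, &g_sketch_module);
	if (rc != 0) {
		spdk_bdev_close(desc);
		return rc;
	}

	*out_desc = desc;
	return 0;
}

As claim_v2_rwo and claim_v2_rwm above also record, a shared_claim_key option is rejected for read-write-once claims but required for read-write-may claims.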
00:06:58.771 passed 00:06:58.771 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:58.771 Test: test_failover_ctrlr ...passed 00:06:58.771 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-13 16:24:29.990724] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.771 [2024-07-13 16:24:29.990895] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.771 passed 00:06:58.771 Test: test_pending_reset ...[2024-07-13 16:24:29.991043] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.771 [2024-07-13 16:24:29.992439] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.771 [2024-07-13 16:24:29.992614] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.771 passed 00:06:58.771 Test: test_attach_ctrlr ...[2024-07-13 16:24:29.993480] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:58.771 passed 00:06:58.771 Test: test_aer_cb ...passed 00:06:58.771 Test: test_submit_nvme_cmd ...passed 00:06:58.771 Test: test_add_remove_trid ...passed 00:06:58.771 Test: test_abort ...[2024-07-13 16:24:29.996106] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:58.771 passed 00:06:58.771 Test: test_get_io_qpair ...passed 00:06:58.771 Test: test_bdev_unregister ...passed 00:06:58.772 Test: test_compare_ns ...passed 00:06:58.772 Test: test_init_ana_log_page ...passed 00:06:58.772 Test: test_get_memory_domains ...passed 00:06:58.772 Test: test_reconnect_qpair ...[2024-07-13 16:24:29.998330] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 passed 00:06:58.772 Test: test_create_bdev_ctrlr ...[2024-07-13 16:24:29.998727] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:58.772 passed 00:06:58.772 Test: test_add_multi_ns_to_bdev ...[2024-07-13 16:24:29.999749] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:58.772 passed 00:06:58.772 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:58.772 Test: test_admin_path ...passed 00:06:58.772 Test: test_reset_bdev_ctrlr ...passed 00:06:58.772 Test: test_find_io_path ...passed 00:06:58.772 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:58.772 Test: test_retry_io_for_io_path_error ...passed 00:06:58.772 Test: test_retry_io_count ...passed 00:06:58.772 Test: test_concurrent_read_ana_log_page ...passed 00:06:58.772 Test: test_retry_io_for_ana_error ...passed 00:06:58.772 Test: test_check_io_error_resiliency_params ...[2024-07-13 16:24:30.005478] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
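test_check_io_error_resiliency_params above walks every rejected combination of the three I/O error resiliency knobs, and the constraints can be read straight off the error messages. A local mirror of that validation, reconstructed from the log rather than copied from the bdev_nvme.c source:

#include <stdbool.h>
#include <stdint.h>

static bool
io_error_resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
				 uint32_t reconnect_delay_sec,
				 uint32_t fast_io_fail_timeout_sec)
{
	if (ctrlr_loss_timeout_sec < -1) {
		return false;	/* "ctrlr_loss_timeout_sec can't be less than -1" */
	}
	if (ctrlr_loss_timeout_sec == 0) {
		/* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
		 * if ctrlr_loss_timeout_sec is 0" */
		return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
	}
	if (reconnect_delay_sec == 0) {
		return false;	/* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
	}
	if (fast_io_fail_timeout_sec != 0 &&
	    reconnect_delay_sec > fast_io_fail_timeout_sec) {
		return false;	/* delay can't exceed the fast_io_fail timeout */
	}
	if (ctrlr_loss_timeout_sec > 0) {
		/* -1 means retry forever, so the upper bounds only apply when positive. */
		if ((int64_t)reconnect_delay_sec > ctrlr_loss_timeout_sec ||
		    (int64_t)fast_io_fail_timeout_sec > ctrlr_loss_timeout_sec) {
			return false;
		}
	}
	return true;
}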
00:06:58.772 [2024-07-13 16:24:30.005563] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:58.772 [2024-07-13 16:24:30.005610] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:58.772 [2024-07-13 16:24:30.005667] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:58.772 [2024-07-13 16:24:30.005687] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:58.772 [2024-07-13 16:24:30.005723] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:58.772 passed 00:06:58.772 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-13 16:24:30.005743] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:58.772 [2024-07-13 16:24:30.005787] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:58.772 [2024-07-13 16:24:30.005821] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:58.772 passed 00:06:58.772 Test: test_reconnect_ctrlr ...[2024-07-13 16:24:30.006513] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 [2024-07-13 16:24:30.006655] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 [2024-07-13 16:24:30.006870] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 [2024-07-13 16:24:30.006965] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 passed 00:06:58.772 Test: test_retry_failover_ctrlr ...[2024-07-13 16:24:30.007067] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 [2024-07-13 16:24:30.007337] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 passed 00:06:58.772 Test: test_fail_path ...[2024-07-13 16:24:30.007733] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 [2024-07-13 16:24:30.007855] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
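test_reconnect_ctrlr, test_retry_failover_ctrlr, and test_fail_path above repeatedly force _bdev_nvme_reset_ctrlr_complete to fail and then check how the path degrades over time. A rough timeline sketch of how the three parameters interact (my reading of their documented semantics, not SPDK code): resets are retried every reconnect_delay_sec; once fast_io_fail_timeout_sec elapses, new I/O fails immediately instead of queueing; once ctrlr_loss_timeout_sec elapses the controller is torn down, with -1 meaning retry forever:

static const char *
path_state_at(int64_t sec_since_failure, int32_t ctrlr_loss_timeout_sec,
	      uint32_t fast_io_fail_timeout_sec)
{
	if (ctrlr_loss_timeout_sec >= 0 && sec_since_failure >= ctrlr_loss_timeout_sec) {
		return "deleted";	/* give up; remove the controller */
	}
	if (fast_io_fail_timeout_sec != 0 && sec_since_failure >= fast_io_fail_timeout_sec) {
		return "fast-io-fail";	/* new I/O fails instead of queueing */
	}
	return "reconnecting";		/* I/O queues while resets are retried */
}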
00:06:58.772 [2024-07-13 16:24:30.007930] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 [2024-07-13 16:24:30.008022] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 [2024-07-13 16:24:30.008107] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 passed 00:06:58.772 Test: test_nvme_ns_cmp ...passed 00:06:58.772 Test: test_ana_transition ...passed 00:06:58.772 Test: test_set_preferred_path ...passed 00:06:58.772 Test: test_find_next_io_path ...passed 00:06:58.772 Test: test_find_io_path_min_qd ...passed 00:06:58.772 Test: test_disable_auto_failback ...[2024-07-13 16:24:30.009340] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 passed 00:06:58.772 Test: test_set_multipath_policy ...passed 00:06:58.772 Test: test_uuid_generation ...passed 00:06:58.772 Test: test_retry_io_to_same_path ...passed 00:06:58.772 Test: test_race_between_reset_and_disconnected ...passed 00:06:58.772 Test: test_ctrlr_op_rpc ...passed 00:06:58.772 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:58.772 Test: test_disable_enable_ctrlr ...[2024-07-13 16:24:30.012684] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 [2024-07-13 16:24:30.012944] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:58.772 passed 00:06:58.772 Test: test_delete_ctrlr_done ...passed 00:06:58.772 Test: test_ns_remove_during_reset ...passed 00:06:58.772 00:06:58.772 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.772 suites 1 1 n/a 0 0 00:06:58.772 tests 48 48 48 0 0 00:06:58.772 asserts 3553 3553 3553 0 n/a 00:06:58.772 00:06:58.772 Elapsed time = 0.028 seconds 00:06:58.772 16:24:30 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:58.772 Test Options 00:06:58.772 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:58.772 00:06:58.772 00:06:58.772 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.772 http://cunit.sourceforge.net/ 00:06:58.772 00:06:58.772 00:06:58.772 Suite: raid 00:06:58.772 Test: test_create_raid ...passed 00:06:58.772 Test: test_create_raid_superblock ...passed 00:06:58.772 Test: test_delete_raid ...passed 00:06:58.772 Test: test_create_raid_invalid_args ...[2024-07-13 16:24:30.069735] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:58.772 [2024-07-13 16:24:30.070240] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:58.772 [2024-07-13 16:24:30.070795] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:58.772 [2024-07-13 16:24:30.071086] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:58.772 [2024-07-13 16:24:30.072011] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:58.772 passed 00:06:58.772 Test: test_delete_raid_invalid_args ...passed 00:06:58.772 Test: test_io_channel ...passed 00:06:58.772 Test: test_reset_io ...passed 00:06:58.772 Test: test_write_io ...passed 00:06:58.772 Test: test_read_io ...passed 00:07:00.149 Test: test_unmap_io ...passed 00:07:00.149 Test: test_io_failure ...[2024-07-13 16:24:31.228519] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:07:00.149 passed 00:07:00.149 Test: test_multi_raid_no_io ...passed 00:07:00.149 Test: test_multi_raid_with_io ...passed 00:07:00.149 Test: test_io_type_supported ...passed 00:07:00.149 Test: test_raid_json_dump_info ...passed 00:07:00.149 Test: test_context_size ...passed 00:07:00.149 Test: test_raid_level_conversions ...passed 00:07:00.149 Test: test_raid_process ...passed 00:07:00.149 Test: test_raid_io_split ...passed 00:07:00.149 00:07:00.149 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.149 suites 1 1 n/a 0 0 00:07:00.149 tests 19 19 19 0 0 00:07:00.149 asserts 177879 177879 177879 0 n/a 00:07:00.149 00:07:00.149 Elapsed time = 1.167 seconds 00:07:00.149 16:24:31 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:07:00.149 00:07:00.149 00:07:00.149 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.149 http://cunit.sourceforge.net/ 00:07:00.149 00:07:00.149 00:07:00.149 Suite: raid_sb 00:07:00.149 Test: test_raid_bdev_write_superblock ...passed 00:07:00.149 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:00.149 Test: test_raid_bdev_parse_superblock ...[2024-07-13 16:24:31.283348] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:00.149 passed 00:07:00.149 00:07:00.149 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.149 suites 1 1 n/a 0 0 00:07:00.149 tests 3 3 3 0 0 00:07:00.149 asserts 32 32 32 0 n/a 00:07:00.149 00:07:00.149 Elapsed time = 0.001 seconds 00:07:00.149 16:24:31 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:07:00.149 00:07:00.149 00:07:00.149 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.149 http://cunit.sourceforge.net/ 00:07:00.149 00:07:00.149 00:07:00.149 Suite: concat 00:07:00.149 Test: test_concat_start ...passed 00:07:00.149 Test: test_concat_rw ...passed 00:07:00.149 Test: test_concat_null_payload ...passed 00:07:00.149 00:07:00.149 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.149 suites 1 1 n/a 0 0 00:07:00.149 tests 3 3 3 0 0 00:07:00.149 asserts 8097 8097 8097 0 n/a 00:07:00.149 00:07:00.149 Elapsed time = 0.005 seconds 00:07:00.149 16:24:31 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:07:00.149 00:07:00.149 00:07:00.149 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.149 http://cunit.sourceforge.net/ 00:07:00.149 00:07:00.149 00:07:00.149 Suite: raid1 00:07:00.149 Test: test_raid1_start ...passed 00:07:00.149 Test: test_raid1_read_balancing ...passed 00:07:00.149 00:07:00.149 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.149 suites 1 1 n/a 0 0 00:07:00.149 tests 2 2 2 0 0 00:07:00.149 asserts 2856 2856 2856 0 
n/a 00:07:00.149 00:07:00.149 Elapsed time = 0.003 seconds 00:07:00.149 16:24:31 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:07:00.149 00:07:00.149 00:07:00.149 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.149 http://cunit.sourceforge.net/ 00:07:00.149 00:07:00.149 00:07:00.149 Suite: zone 00:07:00.149 Test: test_zone_get_operation ...passed 00:07:00.149 Test: test_bdev_zone_get_info ...passed 00:07:00.149 Test: test_bdev_zone_management ...passed 00:07:00.149 Test: test_bdev_zone_append ...passed 00:07:00.149 Test: test_bdev_zone_append_with_md ...passed 00:07:00.149 Test: test_bdev_zone_appendv ...passed 00:07:00.149 Test: test_bdev_zone_appendv_with_md ...passed 00:07:00.149 Test: test_bdev_io_get_append_location ...passed 00:07:00.149 00:07:00.149 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.149 suites 1 1 n/a 0 0 00:07:00.149 tests 8 8 8 0 0 00:07:00.149 asserts 94 94 94 0 n/a 00:07:00.149 00:07:00.149 Elapsed time = 0.000 seconds 00:07:00.149 16:24:31 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:07:00.149 00:07:00.149 00:07:00.149 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.149 http://cunit.sourceforge.net/ 00:07:00.149 00:07:00.149 00:07:00.149 Suite: gpt_parse 00:07:00.150 Test: test_parse_mbr_and_primary ...[2024-07-13 16:24:31.429337] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:00.150 [2024-07-13 16:24:31.429588] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:00.150 [2024-07-13 16:24:31.429630] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:00.150 [2024-07-13 16:24:31.429716] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:00.150 [2024-07-13 16:24:31.429763] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:00.150 [2024-07-13 16:24:31.429843] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:00.150 passed 00:07:00.150 Test: test_parse_secondary ...passed 00:07:00.150 Test: test_check_mbr ...passed 00:07:00.150 Test: test_read_header ...passed 00:07:00.150 Test: test_read_partitions ...passed 00:07:00.150 00:07:00.150 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.150 suites 1 1 n/a 0 0 00:07:00.150 tests 5 5 5 0 0 00:07:00.150 asserts 33 33 33 0 n/a 00:07:00.150 00:07:00.150 Elapsed time = 0.003 seconds 00:07:00.150 [2024-07-13 16:24:31.430363] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:00.150 [2024-07-13 16:24:31.430410] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:00.150 [2024-07-13 16:24:31.430446] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:00.150 [2024-07-13 16:24:31.430481] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:00.150 [2024-07-13 16:24:31.430957] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:00.150 [2024-07-13 16:24:31.430998] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:00.150 [2024-07-13 16:24:31.431051] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:07:00.150 [2024-07-13 16:24:31.431138] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:07:00.150 [2024-07-13 16:24:31.431205] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:07:00.150 [2024-07-13 16:24:31.431244] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:07:00.150 [2024-07-13 16:24:31.431280] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:07:00.150 [2024-07-13 16:24:31.431314] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:07:00.150 [2024-07-13 16:24:31.431366] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:07:00.150 [2024-07-13 16:24:31.431411] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:07:00.150 [2024-07-13 16:24:31.431445] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:07:00.150 [2024-07-13 16:24:31.431476] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:07:00.150 [2024-07-13 16:24:31.431726] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:07:00.150 16:24:31 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:07:00.150 00:07:00.150 00:07:00.150 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.150 http://cunit.sourceforge.net/ 00:07:00.150 00:07:00.150 00:07:00.150 Suite: bdev_part 00:07:00.150 Test: part_test ...[2024-07-13 16:24:31.467257] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:07:00.150 passed 00:07:00.150 Test: part_free_test ...passed 00:07:00.150 Test: part_get_io_channel_test ...passed 00:07:00.150 Test: part_construct_ext ...passed 00:07:00.150 00:07:00.150 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.150 suites 1 1 n/a 0 0 00:07:00.150 tests 4 4 4 0 0 00:07:00.150 asserts 48 48 48 0 n/a 00:07:00.150 00:07:00.150 Elapsed time = 0.045 seconds 00:07:00.150 16:24:31 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:07:00.150 00:07:00.150 00:07:00.150 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.150 http://cunit.sourceforge.net/ 00:07:00.150 00:07:00.150 00:07:00.150 Suite: scsi_nvme_suite 00:07:00.150 Test: scsi_nvme_translate_test ...passed 00:07:00.150 00:07:00.150 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.150 suites 1 1 n/a 0 0 00:07:00.150 tests 1 1 1 0 0 00:07:00.150 asserts 104 104 104 0 n/a 00:07:00.150 00:07:00.150 
Elapsed time = 0.000 seconds 00:07:00.150 16:24:31 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:07:00.150 00:07:00.150 00:07:00.150 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.150 http://cunit.sourceforge.net/ 00:07:00.150 00:07:00.150 00:07:00.150 Suite: lvol 00:07:00.150 Test: ut_lvs_init ...[2024-07-13 16:24:31.590716] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:07:00.150 passed 00:07:00.150 Test: ut_lvol_init ...passed 00:07:00.150 Test: ut_lvol_snapshot ...passed 00:07:00.150 Test: ut_lvol_clone ...[2024-07-13 16:24:31.591334] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:07:00.150 passed 00:07:00.150 Test: ut_lvs_destroy ...passed 00:07:00.150 Test: ut_lvs_unload ...passed 00:07:00.150 Test: ut_lvol_resize ...[2024-07-13 16:24:31.593316] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:07:00.150 passed 00:07:00.150 Test: ut_lvol_set_read_only ...passed 00:07:00.150 Test: ut_lvol_hotremove ...passed 00:07:00.150 Test: ut_vbdev_lvol_get_io_channel ...passed 00:07:00.150 Test: ut_vbdev_lvol_io_type_supported ...passed 00:07:00.150 Test: ut_lvol_read_write ...passed 00:07:00.150 Test: ut_vbdev_lvol_submit_request ...passed 00:07:00.150 Test: ut_lvol_examine_config ...passed 00:07:00.150 Test: ut_lvol_examine_disk ...[2024-07-13 16:24:31.594356] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:07:00.150 passed 00:07:00.150 Test: ut_lvol_rename ...[2024-07-13 16:24:31.595714] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:07:00.150 [2024-07-13 16:24:31.595860] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:07:00.150 passed 00:07:00.150 Test: ut_bdev_finish ...passed 00:07:00.150 Test: ut_lvs_rename ...passed 00:07:00.150 Test: ut_lvol_seek ...passed 00:07:00.150 Test: ut_esnap_dev_create ...[2024-07-13 16:24:31.596811] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:07:00.150 [2024-07-13 16:24:31.596909] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:07:00.150 [2024-07-13 16:24:31.596937] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:07:00.150 [2024-07-13 16:24:31.596994] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:07:00.150 passed 00:07:00.150 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-13 16:24:31.597203] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:07:00.150 [2024-07-13 16:24:31.597252] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:07:00.150 passed 00:07:00.150 00:07:00.150 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:00.150 suites 1 1 n/a 0 0 00:07:00.150 tests 21 21 21 0 0 00:07:00.150 asserts 712 712 712 0 n/a 00:07:00.150 00:07:00.150 Elapsed time = 0.007 seconds 00:07:00.410 16:24:31 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:07:00.410 00:07:00.410 00:07:00.410 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.410 http://cunit.sourceforge.net/ 00:07:00.410 00:07:00.410 00:07:00.410 Suite: zone_block 00:07:00.410 Test: test_zone_block_create ...passed 00:07:00.410 Test: test_zone_block_create_invalid ...[2024-07-13 16:24:31.660382] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:07:00.410 [2024-07-13 16:24:31.660643] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-13 16:24:31.660787] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:07:00.410 [2024-07-13 16:24:31.660845] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-13 16:24:31.660974] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:07:00.410 [2024-07-13 16:24:31.661006] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-13 16:24:31.661088] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:07:00.410 passed 00:07:00.410 Test: test_get_zone_info ...[2024-07-13 16:24:31.661132] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-13 16:24:31.661608] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.661671] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.661724] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 passed 00:07:00.410 Test: test_supported_io_types ...passed 00:07:00.410 Test: test_reset_zone ...[2024-07-13 16:24:31.662376] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 passed 00:07:00.410 Test: test_open_zone ...[2024-07-13 16:24:31.662424] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.662786] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:00.410 [2024-07-13 16:24:31.663302] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.663370] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 passed 00:07:00.410 Test: test_zone_write ...[2024-07-13 16:24:31.663726] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:00.410 [2024-07-13 16:24:31.663775] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.663825] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:00.410 [2024-07-13 16:24:31.663866] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.668738] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:07:00.410 [2024-07-13 16:24:31.668794] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.668862] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:07:00.410 [2024-07-13 16:24:31.668889] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.673688] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:00.410 [2024-07-13 16:24:31.673748] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 passed 00:07:00.410 Test: test_zone_read ...[2024-07-13 16:24:31.674152] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:07:00.410 [2024-07-13 16:24:31.674188] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.674248] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:07:00.410 [2024-07-13 16:24:31.674280] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.674640] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:07:00.410 [2024-07-13 16:24:31.674668] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
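test_zone_write above probes each write-side rejection of the zone_block vbdev: writes to an out-of-range zone, to a zone in a non-writable state, at an lba that is not the zone's current write pointer, or past the zone capacity. A local mirror of those checks, reconstructed from the logged errors (the struct and field names are this sketch's own, not vbdev_zone_block.c's):

#include <stdbool.h>
#include <stdint.h>

struct zone_sketch {
	uint64_t start_lba;	/* first lba of the zone */
	uint64_t write_ptr;	/* next writable lba */
	uint64_t capacity;	/* writable blocks in the zone */
	bool	 writable;	/* false in e.g. full or offline states */
};

static bool
zone_write_ok(const struct zone_sketch *z, uint64_t lba, uint64_t len,
	      uint64_t num_zones, uint64_t zone_size)
{
	if (lba / zone_size >= num_zones) {
		return false;	/* "Trying to write to invalid zone (lba 0x5000)" */
	}
	if (!z->writable) {
		return false;	/* "Trying to write to zone in invalid state 2" */
	}
	if (lba != z->write_ptr) {
		return false;	/* "invalid address (lba 0x407, wp 0x405)" */
	}
	if (lba + len > z->start_lba + z->capacity) {
		return false;	/* "Write exceeds zone capacity" */
	}
	return true;
}

Zone appends (test_append_zone below) follow the same capacity rule but place the data at the write pointer instead of requiring the caller to supply it.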
00:07:00.410 passed 00:07:00.410 Test: test_close_zone ...[2024-07-13 16:24:31.674945] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.675010] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.675205] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.675235] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 passed 00:07:00.410 Test: test_finish_zone ...[2024-07-13 16:24:31.675706] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 passed 00:07:00.410 Test: test_append_zone ...[2024-07-13 16:24:31.675755] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.676055] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:00.410 [2024-07-13 16:24:31.676089] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.676143] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:00.410 [2024-07-13 16:24:31.676159] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:00.410 [2024-07-13 16:24:31.685499] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:00.410 [2024-07-13 16:24:31.685557] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:00.410 passed 00:07:00.411 00:07:00.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.411 suites 1 1 n/a 0 0 00:07:00.411 tests 11 11 11 0 0 00:07:00.411 asserts 3437 3437 3437 0 n/a 00:07:00.411 00:07:00.411 Elapsed time = 0.026 seconds 00:07:00.411 16:24:31 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:07:00.411 00:07:00.411 00:07:00.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.411 http://cunit.sourceforge.net/ 00:07:00.411 00:07:00.411 00:07:00.411 Suite: bdev 00:07:00.411 Test: basic ...[2024-07-13 16:24:31.813241] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55858dffa401): Operation not permitted (rc=-1) 00:07:00.411 [2024-07-13 16:24:31.813608] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55858dffa3c0): Operation not permitted (rc=-1) 00:07:00.411 [2024-07-13 16:24:31.813680] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55858dffa401): Operation not permitted (rc=-1) 00:07:00.411 passed 00:07:00.670 Test: unregister_and_close ...passed 00:07:00.670 Test: unregister_and_close_different_threads ...passed 00:07:00.670 Test: basic_qos ...passed 00:07:00.670 Test: put_channel_during_reset ...passed 00:07:00.670 Test: aborted_reset ...passed 00:07:00.929 Test: aborted_reset_no_outstanding_io ...passed 00:07:00.929 Test: io_during_reset ...passed 00:07:00.929 Test: reset_completions ...passed 00:07:00.929 Test: io_during_qos_queue ...passed 00:07:00.929 Test: io_during_qos_reset ...passed 00:07:01.188 Test: enomem ...passed 00:07:01.188 Test: enomem_multi_bdev ...passed 00:07:01.188 Test: enomem_multi_bdev_unregister ...passed 00:07:01.188 Test: enomem_multi_io_target ...passed 00:07:01.188 Test: qos_dynamic_enable ...passed 00:07:01.447 Test: bdev_histograms_mt ...passed 00:07:01.447 Test: bdev_set_io_timeout_mt ...[2024-07-13 16:24:32.767319] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:07:01.447 passed 00:07:01.447 Test: lock_lba_range_then_submit_io ...[2024-07-13 16:24:32.790880] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55858dffa380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:07:01.447 passed 00:07:01.447 Test: unregister_during_reset ...passed 00:07:01.705 Test: event_notify_and_close ...passed 00:07:01.705 Test: unregister_and_qos_poller ...passed 00:07:01.705 Suite: bdev_wrong_thread 00:07:01.705 Test: spdk_bdev_register_wt ...[2024-07-13 16:24:32.983216] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:07:01.705 passed 00:07:01.705 Test: spdk_bdev_examine_wt ...passed[2024-07-13 16:24:32.983512] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:07:01.705 00:07:01.705 00:07:01.705 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.705 suites 2 2 n/a 0 0 00:07:01.705 tests 24 24 24 0 0 00:07:01.705 asserts 621 621 621 0 n/a 00:07:01.705 00:07:01.705 Elapsed time = 1.205 seconds 00:07:01.705 00:07:01.705 real 0m4.977s 00:07:01.705 user 0m2.103s 00:07:01.705 sys 0m2.873s 00:07:01.705 16:24:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.705 16:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:01.705 
************************************ 00:07:01.705 END TEST unittest_bdev 00:07:01.705 ************************************ 00:07:01.705 16:24:33 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:01.705 16:24:33 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:01.705 16:24:33 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:01.705 16:24:33 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:01.705 16:24:33 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:01.705 16:24:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:01.705 16:24:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.705 16:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:01.705 ************************************ 00:07:01.705 START TEST unittest_bdev_raid5f 00:07:01.705 ************************************ 00:07:01.705 16:24:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:01.705 00:07:01.705 00:07:01.705 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.705 http://cunit.sourceforge.net/ 00:07:01.705 00:07:01.705 00:07:01.705 Suite: raid5f 00:07:01.705 Test: test_raid5f_start ...passed 00:07:02.273 Test: test_raid5f_submit_read_request ...passed 00:07:02.531 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:07:05.849 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:23.932 Test: test_raid5f_chunk_write_error ...passed 00:07:30.491 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:33.020 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:59.555 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:59.555 00:07:59.555 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.555 suites 1 1 n/a 0 0 00:07:59.555 tests 8 8 8 0 0 00:07:59.555 asserts 351864 351864 351864 0 n/a 00:07:59.555 00:07:59.555 Elapsed time = 54.702 seconds 00:07:59.555 00:07:59.555 real 0m54.812s 00:07:59.555 user 0m50.874s 00:07:59.555 sys 0m3.933s 00:07:59.555 16:25:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.555 16:25:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.555 ************************************ 00:07:59.555 END TEST unittest_bdev_raid5f 00:07:59.555 ************************************ 00:07:59.555 16:25:27 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:07:59.555 16:25:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.555 16:25:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.555 16:25:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.555 ************************************ 00:07:59.555 START TEST unittest_blob_blobfs 00:07:59.555 ************************************ 00:07:59.555 16:25:27 -- common/autotest_common.sh@1104 -- # unittest_blob 00:07:59.555 16:25:27 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:59.555 16:25:27 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:59.555 00:07:59.555 00:07:59.555 CUnit - A unit testing framework for C - 
Version 2.1-3 00:07:59.555 http://cunit.sourceforge.net/ 00:07:59.555 00:07:59.555 00:07:59.555 Suite: blob_nocopy_noextent 00:07:59.555 Test: blob_init ...[2024-07-13 16:25:28.042367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:59.555 passed 00:07:59.555 Test: blob_thin_provision ...passed 00:07:59.555 Test: blob_read_only ...passed 00:07:59.555 Test: bs_load ...[2024-07-13 16:25:28.196717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:59.555 passed 00:07:59.555 Test: bs_load_custom_cluster_size ...passed 00:07:59.555 Test: bs_load_after_failed_grow ...passed 00:07:59.555 Test: bs_cluster_sz ...[2024-07-13 16:25:28.247241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:59.555 [2024-07-13 16:25:28.247838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:59.555 [2024-07-13 16:25:28.248078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:59.555 passed 00:07:59.555 Test: bs_resize_md ...passed 00:07:59.555 Test: bs_destroy ...passed 00:07:59.555 Test: bs_type ...passed 00:07:59.555 Test: bs_super_block ...passed 00:07:59.555 Test: bs_test_recover_cluster_count ...passed 00:07:59.555 Test: bs_grow_live ...passed 00:07:59.555 Test: bs_grow_live_no_space ...passed 00:07:59.555 Test: bs_test_grow ...passed 00:07:59.555 Test: blob_serialize_test ...passed 00:07:59.555 Test: super_block_crc ...passed 00:07:59.555 Test: blob_thin_prov_write_count_io ...passed 00:07:59.555 Test: bs_load_iter_test ...passed 00:07:59.555 Test: blob_relations ...[2024-07-13 16:25:28.512892] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.555 [2024-07-13 16:25:28.513036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 [2024-07-13 16:25:28.513943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.555 [2024-07-13 16:25:28.514015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 passed 00:07:59.555 Test: blob_relations2 ...[2024-07-13 16:25:28.536826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.555 [2024-07-13 16:25:28.536939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 [2024-07-13 16:25:28.536982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.555 [2024-07-13 16:25:28.537010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 [2024-07-13 16:25:28.538318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.555 [2024-07-13 16:25:28.538377] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 [2024-07-13 16:25:28.538817] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:59.555 [2024-07-13 16:25:28.538867] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 passed 00:07:59.555 Test: blob_relations3 ...passed 00:07:59.555 Test: blobstore_clean_power_failure ...passed 00:07:59.555 Test: blob_delete_snapshot_power_failure ...[2024-07-13 16:25:28.828969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:59.555 [2024-07-13 16:25:28.851862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:59.555 [2024-07-13 16:25:28.851999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:59.555 [2024-07-13 16:25:28.852046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 [2024-07-13 16:25:28.874467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:59.555 [2024-07-13 16:25:28.874576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:59.555 [2024-07-13 16:25:28.874632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:59.555 [2024-07-13 16:25:28.874682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 [2024-07-13 16:25:28.896445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:59.555 [2024-07-13 16:25:28.896602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 [2024-07-13 16:25:28.918099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:59.555 [2024-07-13 16:25:28.918249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 [2024-07-13 16:25:28.939504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:59.555 [2024-07-13 16:25:28.939673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.555 passed 00:07:59.555 Test: blob_create_snapshot_power_failure ...[2024-07-13 16:25:29.006357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:59.555 [2024-07-13 16:25:29.049250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:59.555 [2024-07-13 16:25:29.070595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:59.555 passed 00:07:59.555 Test: blob_io_unit ...passed 00:07:59.555 Test: blob_io_unit_compatibility 
...passed 00:07:59.555 Test: blob_ext_md_pages ...passed 00:07:59.555 Test: blob_esnap_io_4096_4096 ...passed 00:07:59.555 Test: blob_esnap_io_512_512 ...passed 00:07:59.555 Test: blob_esnap_io_4096_512 ...passed 00:07:59.555 Test: blob_esnap_io_512_4096 ...passed 00:07:59.555 Suite: blob_bs_nocopy_noextent 00:07:59.555 Test: blob_open ...passed 00:07:59.555 Test: blob_create ...[2024-07-13 16:25:29.481895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:59.555 passed 00:07:59.555 Test: blob_create_loop ...passed 00:07:59.555 Test: blob_create_fail ...[2024-07-13 16:25:29.636770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:59.555 passed 00:07:59.555 Test: blob_create_internal ...passed 00:07:59.555 Test: blob_create_zero_extent ...passed 00:07:59.555 Test: blob_snapshot ...passed 00:07:59.555 Test: blob_clone ...passed 00:07:59.555 Test: blob_inflate ...[2024-07-13 16:25:29.966360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:59.555 passed 00:07:59.555 Test: blob_delete ...passed 00:07:59.555 Test: blob_resize_test ...[2024-07-13 16:25:30.082081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:59.555 passed 00:07:59.555 Test: channel_ops ...passed 00:07:59.555 Test: blob_super ...passed 00:07:59.555 Test: blob_rw_verify_iov ...passed 00:07:59.555 Test: blob_unmap ...passed 00:07:59.555 Test: blob_iter ...passed 00:07:59.555 Test: blob_parse_md ...passed 00:07:59.555 Test: bs_load_pending_removal ...passed 00:07:59.555 Test: bs_unload ...[2024-07-13 16:25:30.558155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:59.555 passed 00:07:59.555 Test: bs_usable_clusters ...passed 00:07:59.555 Test: blob_crc ...[2024-07-13 16:25:30.677166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:59.555 [2024-07-13 16:25:30.677366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:59.555 passed 00:07:59.555 Test: blob_flags ...passed 00:07:59.555 Test: bs_version ...passed 00:07:59.555 Test: blob_set_xattrs_test ...[2024-07-13 16:25:30.854307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:59.555 [2024-07-13 16:25:30.854436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:59.555 passed 00:07:59.815 Test: blob_thin_prov_alloc ...passed 00:07:59.815 Test: blob_insert_cluster_msg_test ...passed 00:07:59.815 Test: blob_thin_prov_rw ...passed 00:07:59.815 Test: blob_thin_prov_rle ...passed 00:07:59.815 Test: blob_thin_prov_rw_iov ...passed 00:08:00.074 Test: blob_snapshot_rw ...passed 00:08:00.075 Test: blob_snapshot_rw_iov ...passed 00:08:00.333 Test: blob_inflate_rw ...passed 00:08:00.333 Test: blob_snapshot_freeze_io ...passed 00:08:00.589 Test: blob_operation_split_rw ...passed 00:08:00.589 Test: blob_operation_split_rw_iov ...passed 
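Every "Run Summary: Type Total Ran Passed Failed Inactive" block in this log is printed by CUnit 2.1-3, which each *_ut binary links against. A minimal harness in the same shape as these unit tests (the suite and test names here are placeholders):

#include <CUnit/Basic.h>

static void
test_example(void)
{
	CU_ASSERT(1 + 1 == 2);	/* counted in the "asserts" row of the summary */
}

int
main(void)
{
	CU_pSuite suite;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}
	suite = CU_add_suite("blob_example", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}
	CU_basic_set_mode(CU_BRM_VERBOSE);	/* produces the per-test "passed" lines */
	CU_basic_run_tests();
	CU_cleanup_registry();
	return CU_get_error();
}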
00:08:00.846 Test: blob_simultaneous_operations ...[2024-07-13 16:25:32.073538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:00.846 [2024-07-13 16:25:32.073673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.846 [2024-07-13 16:25:32.075146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:00.846 [2024-07-13 16:25:32.075214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.846 [2024-07-13 16:25:32.089734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:00.846 [2024-07-13 16:25:32.089825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.846 [2024-07-13 16:25:32.089968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:00.846 [2024-07-13 16:25:32.090004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.846 passed 00:08:00.846 Test: blob_persist_test ...passed 00:08:00.846 Test: blob_decouple_snapshot ...passed 00:08:01.104 Test: blob_seek_io_unit ...passed 00:08:01.104 Test: blob_nested_freezes ...passed 00:08:01.104 Suite: blob_blob_nocopy_noextent 00:08:01.104 Test: blob_write ...passed 00:08:01.104 Test: blob_read ...passed 00:08:01.363 Test: blob_rw_verify ...passed 00:08:01.363 Test: blob_rw_verify_iov_nomem ...passed 00:08:01.363 Test: blob_rw_iov_read_only ...passed 00:08:01.363 Test: blob_xattr ...passed 00:08:01.621 Test: blob_dirty_shutdown ...passed 00:08:01.621 Test: blob_is_degraded ...passed 00:08:01.621 Suite: blob_esnap_bs_nocopy_noextent 00:08:01.621 Test: blob_esnap_create ...passed 00:08:01.621 Test: blob_esnap_thread_add_remove ...passed 00:08:01.621 Test: blob_esnap_clone_snapshot ...passed 00:08:01.878 Test: blob_esnap_clone_inflate ...passed 00:08:01.878 Test: blob_esnap_clone_decouple ...passed 00:08:01.878 Test: blob_esnap_clone_reload ...passed 00:08:01.878 Test: blob_esnap_hotplug ...passed 00:08:01.878 Suite: blob_nocopy_extent 00:08:01.878 Test: blob_init ...[2024-07-13 16:25:33.327452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:01.878 passed 00:08:02.135 Test: blob_thin_provision ...passed 00:08:02.135 Test: blob_read_only ...passed 00:08:02.135 Test: bs_load ...[2024-07-13 16:25:33.408229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:02.135 passed 00:08:02.135 Test: bs_load_custom_cluster_size ...passed 00:08:02.135 Test: bs_load_after_failed_grow ...passed 00:08:02.135 Test: bs_cluster_sz ...[2024-07-13 16:25:33.451509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:02.135 [2024-07-13 16:25:33.451825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:02.135 [2024-07-13 16:25:33.451878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:02.135 passed 00:08:02.135 Test: bs_resize_md ...passed 00:08:02.135 Test: bs_destroy ...passed 00:08:02.135 Test: bs_type ...passed 00:08:02.135 Test: bs_super_block ...passed 00:08:02.135 Test: bs_test_recover_cluster_count ...passed 00:08:02.135 Test: bs_grow_live ...passed 00:08:02.135 Test: bs_grow_live_no_space ...passed 00:08:02.135 Test: bs_test_grow ...passed 00:08:02.394 Test: blob_serialize_test ...passed 00:08:02.394 Test: super_block_crc ...passed 00:08:02.394 Test: blob_thin_prov_write_count_io ...passed 00:08:02.394 Test: bs_load_iter_test ...passed 00:08:02.394 Test: blob_relations ...[2024-07-13 16:25:33.711508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:02.394 [2024-07-13 16:25:33.711657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.394 [2024-07-13 16:25:33.712586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:02.394 [2024-07-13 16:25:33.712658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.394 passed 00:08:02.394 Test: blob_relations2 ...[2024-07-13 16:25:33.736989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:02.394 [2024-07-13 16:25:33.737114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.394 [2024-07-13 16:25:33.737147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:02.394 [2024-07-13 16:25:33.737180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.394 [2024-07-13 16:25:33.738569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:02.394 [2024-07-13 16:25:33.738630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.394 [2024-07-13 16:25:33.739011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:02.394 [2024-07-13 16:25:33.739060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.394 passed 00:08:02.394 Test: blob_relations3 ...passed 00:08:02.652 Test: blobstore_clean_power_failure ...passed 00:08:02.652 Test: blob_delete_snapshot_power_failure ...[2024-07-13 16:25:34.015772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:02.652 [2024-07-13 16:25:34.036706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:02.652 [2024-07-13 16:25:34.057847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:02.652 [2024-07-13 16:25:34.057961] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:02.652 [2024-07-13 16:25:34.057996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.652 [2024-07-13 16:25:34.078862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:02.652 [2024-07-13 16:25:34.078976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:02.652 [2024-07-13 16:25:34.079016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:02.652 [2024-07-13 16:25:34.079048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.652 [2024-07-13 16:25:34.099837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:02.652 [2024-07-13 16:25:34.099969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:02.652 [2024-07-13 16:25:34.100006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:02.653 [2024-07-13 16:25:34.100060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.653 [2024-07-13 16:25:34.120952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:02.653 [2024-07-13 16:25:34.121099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.911 [2024-07-13 16:25:34.141884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:02.911 [2024-07-13 16:25:34.142031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.911 [2024-07-13 16:25:34.162980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:02.911 [2024-07-13 16:25:34.163130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.911 passed 00:08:02.911 Test: blob_create_snapshot_power_failure ...[2024-07-13 16:25:34.225898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:02.911 [2024-07-13 16:25:34.246528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:02.911 [2024-07-13 16:25:34.287222] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:02.911 [2024-07-13 16:25:34.307897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:02.911 passed 00:08:03.169 Test: blob_io_unit ...passed 00:08:03.169 Test: blob_io_unit_compatibility ...passed 00:08:03.169 Test: blob_ext_md_pages ...passed 00:08:03.169 Test: blob_esnap_io_4096_4096 ...passed 00:08:03.169 Test: blob_esnap_io_512_512 ...passed 00:08:03.169 Test: blob_esnap_io_4096_512 ...passed 00:08:03.169 Test: 
blob_esnap_io_512_4096 ...passed 00:08:03.169 Suite: blob_bs_nocopy_extent 00:08:03.427 Test: blob_open ...passed 00:08:03.427 Test: blob_create ...[2024-07-13 16:25:34.705144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:03.427 passed 00:08:03.427 Test: blob_create_loop ...passed 00:08:03.427 Test: blob_create_fail ...[2024-07-13 16:25:34.862303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:03.427 passed 00:08:03.686 Test: blob_create_internal ...passed 00:08:03.686 Test: blob_create_zero_extent ...passed 00:08:03.686 Test: blob_snapshot ...passed 00:08:03.686 Test: blob_clone ...passed 00:08:03.944 Test: blob_inflate ...[2024-07-13 16:25:35.197286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:03.944 passed 00:08:03.944 Test: blob_delete ...passed 00:08:03.944 Test: blob_resize_test ...[2024-07-13 16:25:35.311959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:03.944 passed 00:08:03.944 Test: channel_ops ...passed 00:08:04.202 Test: blob_super ...passed 00:08:04.202 Test: blob_rw_verify_iov ...passed 00:08:04.202 Test: blob_unmap ...passed 00:08:04.202 Test: blob_iter ...passed 00:08:04.461 Test: blob_parse_md ...passed 00:08:04.461 Test: bs_load_pending_removal ...passed 00:08:04.461 Test: bs_unload ...[2024-07-13 16:25:35.770189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:04.461 passed 00:08:04.461 Test: bs_usable_clusters ...passed 00:08:04.461 Test: blob_crc ...[2024-07-13 16:25:35.885031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:04.461 [2024-07-13 16:25:35.885172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:04.461 passed 00:08:04.720 Test: blob_flags ...passed 00:08:04.720 Test: bs_version ...passed 00:08:04.720 Test: blob_set_xattrs_test ...[2024-07-13 16:25:36.057344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:04.720 [2024-07-13 16:25:36.057494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:04.720 passed 00:08:04.980 Test: blob_thin_prov_alloc ...passed 00:08:04.980 Test: blob_insert_cluster_msg_test ...passed 00:08:04.980 Test: blob_thin_prov_rw ...passed 00:08:04.980 Test: blob_thin_prov_rle ...passed 00:08:04.980 Test: blob_thin_prov_rw_iov ...passed 00:08:05.239 Test: blob_snapshot_rw ...passed 00:08:05.239 Test: blob_snapshot_rw_iov ...passed 00:08:05.498 Test: blob_inflate_rw ...passed 00:08:05.498 Test: blob_snapshot_freeze_io ...passed 00:08:05.757 Test: blob_operation_split_rw ...passed 00:08:05.757 Test: blob_operation_split_rw_iov ...passed 00:08:05.757 Test: blob_simultaneous_operations ...[2024-07-13 16:25:37.227072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:05.757 [2024-07-13 
16:25:37.227204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:06.016 [2024-07-13 16:25:37.228608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:06.016 [2024-07-13 16:25:37.228656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:06.016 [2024-07-13 16:25:37.242623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:06.016 [2024-07-13 16:25:37.242712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:06.016 [2024-07-13 16:25:37.242830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:06.016 [2024-07-13 16:25:37.242851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:06.016 passed 00:08:06.016 Test: blob_persist_test ...passed 00:08:06.016 Test: blob_decouple_snapshot ...passed 00:08:06.274 Test: blob_seek_io_unit ...passed 00:08:06.274 Test: blob_nested_freezes ...passed 00:08:06.274 Suite: blob_blob_nocopy_extent 00:08:06.274 Test: blob_write ...passed 00:08:06.274 Test: blob_read ...passed 00:08:06.532 Test: blob_rw_verify ...passed 00:08:06.532 Test: blob_rw_verify_iov_nomem ...passed 00:08:06.532 Test: blob_rw_iov_read_only ...passed 00:08:06.532 Test: blob_xattr ...passed 00:08:06.790 Test: blob_dirty_shutdown ...passed 00:08:06.790 Test: blob_is_degraded ...passed 00:08:06.790 Suite: blob_esnap_bs_nocopy_extent 00:08:06.790 Test: blob_esnap_create ...passed 00:08:06.790 Test: blob_esnap_thread_add_remove ...passed 00:08:07.048 Test: blob_esnap_clone_snapshot ...passed 00:08:07.048 Test: blob_esnap_clone_inflate ...passed 00:08:07.048 Test: blob_esnap_clone_decouple ...passed 00:08:07.048 Test: blob_esnap_clone_reload ...passed 00:08:07.306 Test: blob_esnap_hotplug ...passed 00:08:07.306 Suite: blob_copy_noextent 00:08:07.306 Test: blob_init ...[2024-07-13 16:25:38.519982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:07.306 passed 00:08:07.306 Test: blob_thin_provision ...passed 00:08:07.306 Test: blob_read_only ...passed 00:08:07.306 Test: bs_load ...[2024-07-13 16:25:38.599121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:07.306 passed 00:08:07.306 Test: bs_load_custom_cluster_size ...passed 00:08:07.306 Test: bs_load_after_failed_grow ...passed 00:08:07.306 Test: bs_cluster_sz ...[2024-07-13 16:25:38.641574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:07.306 [2024-07-13 16:25:38.641796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
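Every suite repeats the bs_unload check seen above ("Blobstore still has open blobs"): spdk_bs_unload refuses to tear the blobstore down until each blob handle has been closed, reporting the failure through its completion callback (-EBUSY in current releases). A sketch of the required ordering, with illustrative function names:

#include "spdk/blob.h"

static void
unload_done(void *cb_arg, int bserrno)
{
    /* 0 once every blob was closed before the unload */
}

static void
close_done(void *cb_arg, int bserrno)
{
    struct spdk_blob_store *bs = cb_arg;

    if (bserrno == 0) {
        spdk_bs_unload(bs, unload_done, NULL); /* now legal: no open blobs */
    }
}

static void
shut_down_bs(struct spdk_blob_store *bs, struct spdk_blob *last_open_blob)
{
    spdk_blob_close(last_open_blob, close_done, bs);
}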
00:08:07.306 [2024-07-13 16:25:38.641841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:07.306 passed 00:08:07.306 Test: bs_resize_md ...passed 00:08:07.306 Test: bs_destroy ...passed 00:08:07.306 Test: bs_type ...passed 00:08:07.306 Test: bs_super_block ...passed 00:08:07.306 Test: bs_test_recover_cluster_count ...passed 00:08:07.306 Test: bs_grow_live ...passed 00:08:07.306 Test: bs_grow_live_no_space ...passed 00:08:07.564 Test: bs_test_grow ...passed 00:08:07.564 Test: blob_serialize_test ...passed 00:08:07.564 Test: super_block_crc ...passed 00:08:07.564 Test: blob_thin_prov_write_count_io ...passed 00:08:07.564 Test: bs_load_iter_test ...passed 00:08:07.564 Test: blob_relations ...[2024-07-13 16:25:38.901914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:07.564 [2024-07-13 16:25:38.902051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.564 [2024-07-13 16:25:38.902611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:07.564 [2024-07-13 16:25:38.902645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.564 passed 00:08:07.564 Test: blob_relations2 ...[2024-07-13 16:25:38.926531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:07.564 [2024-07-13 16:25:38.926648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.564 [2024-07-13 16:25:38.926674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:07.564 [2024-07-13 16:25:38.926689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.564 [2024-07-13 16:25:38.927525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:07.564 [2024-07-13 16:25:38.927582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.564 [2024-07-13 16:25:38.927841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:07.564 [2024-07-13 16:25:38.927884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.564 passed 00:08:07.564 Test: blob_relations3 ...passed 00:08:07.826 Test: blobstore_clean_power_failure ...passed 00:08:07.826 Test: blob_delete_snapshot_power_failure ...[2024-07-13 16:25:39.201640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:07.826 [2024-07-13 16:25:39.221734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:07.826 [2024-07-13 16:25:39.221849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:07.826 [2024-07-13 16:25:39.221878] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.826 [2024-07-13 16:25:39.241819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:07.826 [2024-07-13 16:25:39.241929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:07.826 [2024-07-13 16:25:39.241966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:07.826 [2024-07-13 16:25:39.241991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.826 [2024-07-13 16:25:39.261933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:07.826 [2024-07-13 16:25:39.262076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:07.826 [2024-07-13 16:25:39.281988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:07.826 [2024-07-13 16:25:39.282119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:08.093 [2024-07-13 16:25:39.302147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:08.093 [2024-07-13 16:25:39.302272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:08.093 passed 00:08:08.093 Test: blob_create_snapshot_power_failure ...[2024-07-13 16:25:39.361924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:08.093 [2024-07-13 16:25:39.401492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:08.093 [2024-07-13 16:25:39.421679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:08.093 passed 00:08:08.093 Test: blob_io_unit ...passed 00:08:08.093 Test: blob_io_unit_compatibility ...passed 00:08:08.093 Test: blob_ext_md_pages ...passed 00:08:08.351 Test: blob_esnap_io_4096_4096 ...passed 00:08:08.351 Test: blob_esnap_io_512_512 ...passed 00:08:08.351 Test: blob_esnap_io_4096_512 ...passed 00:08:08.351 Test: blob_esnap_io_512_4096 ...passed 00:08:08.351 Suite: blob_bs_copy_noextent 00:08:08.351 Test: blob_open ...passed 00:08:08.351 Test: blob_create ...[2024-07-13 16:25:39.816849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:08.609 passed 00:08:08.609 Test: blob_create_loop ...passed 00:08:08.609 Test: blob_create_fail ...[2024-07-13 16:25:39.954953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:08.609 passed 00:08:08.609 Test: blob_create_internal ...passed 00:08:08.868 Test: blob_create_zero_extent ...passed 00:08:08.868 Test: blob_snapshot ...passed 00:08:08.868 Test: blob_clone ...passed 00:08:08.868 Test: blob_inflate ...[2024-07-13 16:25:40.260838] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:08.868 passed 00:08:09.127 Test: blob_delete ...passed 00:08:09.127 Test: blob_resize_test ...[2024-07-13 16:25:40.379682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:09.127 passed 00:08:09.127 Test: channel_ops ...passed 00:08:09.127 Test: blob_super ...passed 00:08:09.127 Test: blob_rw_verify_iov ...passed 00:08:09.385 Test: blob_unmap ...passed 00:08:09.385 Test: blob_iter ...passed 00:08:09.385 Test: blob_parse_md ...passed 00:08:09.385 Test: bs_load_pending_removal ...passed 00:08:09.385 Test: bs_unload ...[2024-07-13 16:25:40.848479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:09.644 passed 00:08:09.644 Test: bs_usable_clusters ...passed 00:08:09.644 Test: blob_crc ...[2024-07-13 16:25:40.968543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:09.644 [2024-07-13 16:25:40.968708] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:09.644 passed 00:08:09.644 Test: blob_flags ...passed 00:08:09.644 Test: bs_version ...passed 00:08:09.902 Test: blob_set_xattrs_test ...[2024-07-13 16:25:41.148049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:09.902 [2024-07-13 16:25:41.148175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:09.902 passed 00:08:09.902 Test: blob_thin_prov_alloc ...passed 00:08:10.161 Test: blob_insert_cluster_msg_test ...passed 00:08:10.161 Test: blob_thin_prov_rw ...passed 00:08:10.161 Test: blob_thin_prov_rle ...passed 00:08:10.161 Test: blob_thin_prov_rw_iov ...passed 00:08:10.419 Test: blob_snapshot_rw ...passed 00:08:10.419 Test: blob_snapshot_rw_iov ...passed 00:08:10.677 Test: blob_inflate_rw ...passed 00:08:10.677 Test: blob_snapshot_freeze_io ...passed 00:08:10.936 Test: blob_operation_split_rw ...passed 00:08:10.936 Test: blob_operation_split_rw_iov ...passed 00:08:10.936 Test: blob_simultaneous_operations ...[2024-07-13 16:25:42.367178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:10.936 [2024-07-13 16:25:42.367310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:10.936 [2024-07-13 16:25:42.367911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:10.936 [2024-07-13 16:25:42.367957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:10.936 [2024-07-13 16:25:42.371479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:10.936 [2024-07-13 16:25:42.371530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:10.936 [2024-07-13 16:25:42.371626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:08:10.936 [2024-07-13 16:25:42.371643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:11.195 passed 00:08:11.195 Test: blob_persist_test ...passed 00:08:11.195 Test: blob_decouple_snapshot ...passed 00:08:11.195 Test: blob_seek_io_unit ...passed 00:08:11.195 Test: blob_nested_freezes ...passed 00:08:11.195 Suite: blob_blob_copy_noextent 00:08:11.453 Test: blob_write ...passed 00:08:11.453 Test: blob_read ...passed 00:08:11.453 Test: blob_rw_verify ...passed 00:08:11.453 Test: blob_rw_verify_iov_nomem ...passed 00:08:11.710 Test: blob_rw_iov_read_only ...passed 00:08:11.710 Test: blob_xattr ...passed 00:08:11.710 Test: blob_dirty_shutdown ...passed 00:08:11.710 Test: blob_is_degraded ...passed 00:08:11.710 Suite: blob_esnap_bs_copy_noextent 00:08:11.968 Test: blob_esnap_create ...passed 00:08:11.968 Test: blob_esnap_thread_add_remove ...passed 00:08:11.968 Test: blob_esnap_clone_snapshot ...passed 00:08:11.968 Test: blob_esnap_clone_inflate ...passed 00:08:12.227 Test: blob_esnap_clone_decouple ...passed 00:08:12.227 Test: blob_esnap_clone_reload ...passed 00:08:12.227 Test: blob_esnap_hotplug ...passed 00:08:12.227 Suite: blob_copy_extent 00:08:12.227 Test: blob_init ...[2024-07-13 16:25:43.575115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:12.227 passed 00:08:12.227 Test: blob_thin_provision ...passed 00:08:12.227 Test: blob_read_only ...passed 00:08:12.227 Test: bs_load ...[2024-07-13 16:25:43.653421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:12.227 passed 00:08:12.227 Test: bs_load_custom_cluster_size ...passed 00:08:12.227 Test: bs_load_after_failed_grow ...passed 00:08:12.227 Test: bs_cluster_sz ...[2024-07-13 16:25:43.694294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:12.227 [2024-07-13 16:25:43.694520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
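The blob_simultaneous_operations and blob_relations failures above all funnel through bs_is_blob_deletable: a snapshot cannot be deleted while it is still open or while it has more than one clone; with at most one clone, the delete re-parents that clone instead of failing. A sketch of the caller-visible side of that rule (names are illustrative):

#include "spdk/blob.h"

static void
delete_done(void *cb_arg, int bserrno)
{
    if (bserrno != 0) {
        /* snapshot still open, or it still has multiple clones */
    }
}

static void
try_delete_snapshot(struct spdk_blob_store *bs, spdk_blob_id snap_id)
{
    /* only deletable once closed and down to at most one clone */
    spdk_bs_delete_blob(bs, snap_id, delete_done, NULL);
}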
00:08:12.227 [2024-07-13 16:25:43.694557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:12.485 passed 00:08:12.485 Test: bs_resize_md ...passed 00:08:12.485 Test: bs_destroy ...passed 00:08:12.485 Test: bs_type ...passed 00:08:12.485 Test: bs_super_block ...passed 00:08:12.485 Test: bs_test_recover_cluster_count ...passed 00:08:12.485 Test: bs_grow_live ...passed 00:08:12.485 Test: bs_grow_live_no_space ...passed 00:08:12.485 Test: bs_test_grow ...passed 00:08:12.485 Test: blob_serialize_test ...passed 00:08:12.485 Test: super_block_crc ...passed 00:08:12.485 Test: blob_thin_prov_write_count_io ...passed 00:08:12.485 Test: bs_load_iter_test ...passed 00:08:12.745 Test: blob_relations ...[2024-07-13 16:25:43.963448] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:12.745 [2024-07-13 16:25:43.963583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.745 [2024-07-13 16:25:43.964463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:12.745 [2024-07-13 16:25:43.964517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.745 passed 00:08:12.745 Test: blob_relations2 ...[2024-07-13 16:25:43.989152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:12.745 [2024-07-13 16:25:43.989289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.745 [2024-07-13 16:25:43.989350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:12.745 [2024-07-13 16:25:43.989377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.745 [2024-07-13 16:25:43.990717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:12.745 [2024-07-13 16:25:43.990780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.745 [2024-07-13 16:25:43.991174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:12.745 [2024-07-13 16:25:43.991225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.745 passed 00:08:12.745 Test: blob_relations3 ...passed 00:08:13.004 Test: blobstore_clean_power_failure ...passed 00:08:13.004 Test: blob_delete_snapshot_power_failure ...[2024-07-13 16:25:44.285928] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:13.004 [2024-07-13 16:25:44.307164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:13.004 [2024-07-13 16:25:44.328428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:13.004 [2024-07-13 16:25:44.328541] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:13.004 [2024-07-13 16:25:44.328574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.004 [2024-07-13 16:25:44.353971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:13.004 [2024-07-13 16:25:44.354078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:13.004 [2024-07-13 16:25:44.354103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:13.004 [2024-07-13 16:25:44.354130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.005 [2024-07-13 16:25:44.374705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:13.005 [2024-07-13 16:25:44.374808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:13.005 [2024-07-13 16:25:44.374848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:13.005 [2024-07-13 16:25:44.374875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.005 [2024-07-13 16:25:44.395590] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:13.005 [2024-07-13 16:25:44.395738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.005 [2024-07-13 16:25:44.416465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:13.005 [2024-07-13 16:25:44.416585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.005 [2024-07-13 16:25:44.437044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:13.005 [2024-07-13 16:25:44.437157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.263 passed 00:08:13.263 Test: blob_create_snapshot_power_failure ...[2024-07-13 16:25:44.499497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:13.263 [2024-07-13 16:25:44.520017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:13.263 [2024-07-13 16:25:44.560246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:13.263 [2024-07-13 16:25:44.580820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:13.263 passed 00:08:13.263 Test: blob_io_unit ...passed 00:08:13.263 Test: blob_io_unit_compatibility ...passed 00:08:13.263 Test: blob_ext_md_pages ...passed 00:08:13.534 Test: blob_esnap_io_4096_4096 ...passed 00:08:13.534 Test: blob_esnap_io_512_512 ...passed 00:08:13.534 Test: blob_esnap_io_4096_512 ...passed 00:08:13.534 Test: 
blob_esnap_io_512_4096 ...passed 00:08:13.534 Suite: blob_bs_copy_extent 00:08:13.534 Test: blob_open ...passed 00:08:13.534 Test: blob_create ...[2024-07-13 16:25:44.983864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:13.807 passed 00:08:13.807 Test: blob_create_loop ...passed 00:08:13.807 Test: blob_create_fail ...[2024-07-13 16:25:45.139545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:13.807 passed 00:08:13.807 Test: blob_create_internal ...passed 00:08:13.807 Test: blob_create_zero_extent ...passed 00:08:14.066 Test: blob_snapshot ...passed 00:08:14.066 Test: blob_clone ...passed 00:08:14.066 Test: blob_inflate ...[2024-07-13 16:25:45.441092] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:14.066 passed 00:08:14.066 Test: blob_delete ...passed 00:08:14.325 Test: blob_resize_test ...[2024-07-13 16:25:45.559682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:14.325 passed 00:08:14.325 Test: channel_ops ...passed 00:08:14.325 Test: blob_super ...passed 00:08:14.325 Test: blob_rw_verify_iov ...passed 00:08:14.583 Test: blob_unmap ...passed 00:08:14.583 Test: blob_iter ...passed 00:08:14.583 Test: blob_parse_md ...passed 00:08:14.583 Test: bs_load_pending_removal ...passed 00:08:14.584 Test: bs_unload ...[2024-07-13 16:25:46.044071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:14.843 passed 00:08:14.843 Test: bs_usable_clusters ...passed 00:08:14.843 Test: blob_crc ...[2024-07-13 16:25:46.163256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:14.843 [2024-07-13 16:25:46.163423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:14.843 passed 00:08:14.843 Test: blob_flags ...passed 00:08:14.843 Test: bs_version ...passed 00:08:15.102 Test: blob_set_xattrs_test ...[2024-07-13 16:25:46.336685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:15.102 [2024-07-13 16:25:46.336821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:15.102 passed 00:08:15.102 Test: blob_thin_prov_alloc ...passed 00:08:15.102 Test: blob_insert_cluster_msg_test ...passed 00:08:15.361 Test: blob_thin_prov_rw ...passed 00:08:15.361 Test: blob_thin_prov_rle ...passed 00:08:15.361 Test: blob_thin_prov_rw_iov ...passed 00:08:15.361 Test: blob_snapshot_rw ...passed 00:08:15.619 Test: blob_snapshot_rw_iov ...passed 00:08:15.877 Test: blob_inflate_rw ...passed 00:08:15.877 Test: blob_snapshot_freeze_io ...passed 00:08:15.877 Test: blob_operation_split_rw ...passed 00:08:16.136 Test: blob_operation_split_rw_iov ...passed 00:08:16.136 Test: blob_simultaneous_operations ...[2024-07-13 16:25:47.523731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.136 [2024-07-13 
16:25:47.523832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.136 [2024-07-13 16:25:47.524395] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.136 [2024-07-13 16:25:47.524437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.136 [2024-07-13 16:25:47.527740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.136 [2024-07-13 16:25:47.527788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.136 [2024-07-13 16:25:47.527897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.136 [2024-07-13 16:25:47.527919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.136 passed 00:08:16.396 Test: blob_persist_test ...passed 00:08:16.396 Test: blob_decouple_snapshot ...passed 00:08:16.396 Test: blob_seek_io_unit ...passed 00:08:16.396 Test: blob_nested_freezes ...passed 00:08:16.396 Suite: blob_blob_copy_extent 00:08:16.396 Test: blob_write ...passed 00:08:16.654 Test: blob_read ...passed 00:08:16.654 Test: blob_rw_verify ...passed 00:08:16.654 Test: blob_rw_verify_iov_nomem ...passed 00:08:16.912 Test: blob_rw_iov_read_only ...passed 00:08:16.912 Test: blob_xattr ...passed 00:08:16.912 Test: blob_dirty_shutdown ...passed 00:08:16.912 Test: blob_is_degraded ...passed 00:08:16.912 Suite: blob_esnap_bs_copy_extent 00:08:16.912 Test: blob_esnap_create ...passed 00:08:17.170 Test: blob_esnap_thread_add_remove ...passed 00:08:17.170 Test: blob_esnap_clone_snapshot ...passed 00:08:17.170 Test: blob_esnap_clone_inflate ...passed 00:08:17.170 Test: blob_esnap_clone_decouple ...passed 00:08:17.428 Test: blob_esnap_clone_reload ...passed 00:08:17.428 Test: blob_esnap_hotplug ...passed 00:08:17.428 00:08:17.428 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.428 suites 16 16 n/a 0 0 00:08:17.428 tests 348 348 348 0 0 00:08:17.428 asserts 92605 92605 92605 0 n/a 00:08:17.428 00:08:17.428 Elapsed time = 20.687 seconds 00:08:17.428 16:25:48 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:08:17.428 00:08:17.428 00:08:17.428 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.428 http://cunit.sourceforge.net/ 00:08:17.428 00:08:17.428 00:08:17.428 Suite: blob_bdev 00:08:17.428 Test: create_bs_dev ...passed 00:08:17.428 Test: create_bs_dev_ro ...[2024-07-13 16:25:48.851387] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:08:17.428 passed 00:08:17.428 Test: create_bs_dev_rw ...passed 00:08:17.428 Test: claim_bs_dev ...[2024-07-13 16:25:48.852601] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:08:17.428 passed 00:08:17.428 Test: claim_bs_dev_ro ...passed 00:08:17.428 Test: deferred_destroy_refs ...passed 00:08:17.428 Test: deferred_destroy_channels ...passed 00:08:17.428 Test: deferred_destroy_threads ...passed 00:08:17.428 00:08:17.428 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.428 suites 1 1 n/a 0 0 00:08:17.428 tests 8 8 8 0 0 00:08:17.428 
asserts 119 119 119 0 n/a 00:08:17.428 00:08:17.428 Elapsed time = 0.001 seconds 00:08:17.428 16:25:48 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:08:17.428 00:08:17.428 00:08:17.428 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.428 http://cunit.sourceforge.net/ 00:08:17.428 00:08:17.428 00:08:17.428 Suite: tree 00:08:17.428 Test: blobfs_tree_op_test ...passed 00:08:17.428 00:08:17.428 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.428 suites 1 1 n/a 0 0 00:08:17.428 tests 1 1 1 0 0 00:08:17.428 asserts 27 27 27 0 n/a 00:08:17.429 00:08:17.429 Elapsed time = 0.000 seconds 00:08:17.687 16:25:48 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:08:17.687 00:08:17.687 00:08:17.687 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.687 http://cunit.sourceforge.net/ 00:08:17.687 00:08:17.687 00:08:17.687 Suite: blobfs_async_ut 00:08:17.687 Test: fs_init ...passed 00:08:17.687 Test: fs_open ...passed 00:08:17.688 Test: fs_create ...passed 00:08:17.688 Test: fs_truncate ...passed 00:08:17.688 Test: fs_rename ...[2024-07-13 16:25:49.133486] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:17.688 passed 00:08:17.688 Test: fs_rw_async ...passed 00:08:17.946 Test: fs_writev_readv_async ...passed 00:08:17.946 Test: tree_find_buffer_ut ...passed 00:08:17.946 Test: channel_ops ...passed 00:08:17.946 Test: channel_ops_sync ...passed 00:08:17.946 00:08:17.946 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.946 suites 1 1 n/a 0 0 00:08:17.946 tests 10 10 10 0 0 00:08:17.946 asserts 292 292 292 0 n/a 00:08:17.946 00:08:17.946 Elapsed time = 0.279 seconds 00:08:17.946 16:25:49 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:17.946 00:08:17.946 00:08:17.946 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.946 http://cunit.sourceforge.net/ 00:08:17.946 00:08:17.946 00:08:17.946 Suite: blobfs_sync_ut 00:08:17.946 Test: cache_read_after_write ...[2024-07-13 16:25:49.381551] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:17.946 passed 00:08:17.946 Test: file_length ...passed 00:08:18.205 Test: append_write_to_extend_blob ...passed 00:08:18.205 Test: partial_buffer ...passed 00:08:18.205 Test: cache_write_null_buffer ...passed 00:08:18.205 Test: fs_create_sync ...passed 00:08:18.205 Test: fs_rename_sync ...passed 00:08:18.205 Test: cache_append_no_cache ...passed 00:08:18.205 Test: fs_delete_file_without_close ...passed 00:08:18.205 00:08:18.205 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.205 suites 1 1 n/a 0 0 00:08:18.205 tests 9 9 9 0 0 00:08:18.205 asserts 345 345 345 0 n/a 00:08:18.205 00:08:18.205 Elapsed time = 0.510 seconds 00:08:18.205 16:25:49 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:18.205 00:08:18.205 00:08:18.205 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.205 http://cunit.sourceforge.net/ 00:08:18.205 00:08:18.205 00:08:18.205 Suite: blobfs_bdev_ut 00:08:18.205 Test: spdk_blobfs_bdev_detect_test ...[2024-07-13 16:25:49.629625] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
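fs_rename and cache_read_after_write above both probe blobfs's delete path: once a name has been renamed away, deleting it fails with -ENOENT, which blobfs logs as "Cannot find the file=... to deleted". A sketch against the async API in spdk/blobfs.h (the wrapper and file name are hypothetical):

#include <errno.h>
#include "spdk/blobfs.h"

static void
delete_done(void *ctx, int fserrno)
{
    if (fserrno == -ENOENT) {
        /* the old name no longer exists after the rename */
    }
}

static void
delete_old_name(struct spdk_filesystem *fs)
{
    spdk_fs_delete_file_async(fs, "file1", delete_done, NULL);
}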
00:08:18.205 passed 00:08:18.205 Test: spdk_blobfs_bdev_create_test ...[2024-07-13 16:25:49.630072] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:18.205 passed 00:08:18.205 Test: spdk_blobfs_bdev_mount_test ...passed 00:08:18.205 00:08:18.205 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.205 suites 1 1 n/a 0 0 00:08:18.205 tests 3 3 3 0 0 00:08:18.205 asserts 9 9 9 0 n/a 00:08:18.205 00:08:18.205 Elapsed time = 0.001 seconds 00:08:18.205 00:08:18.205 real 0m21.647s 00:08:18.205 user 0m20.989s 00:08:18.205 sys 0m0.933s 00:08:18.205 16:25:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.205 16:25:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.205 ************************************ 00:08:18.205 END TEST unittest_blob_blobfs 00:08:18.205 ************************************ 00:08:18.464 16:25:49 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:08:18.464 16:25:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:18.464 16:25:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.464 16:25:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.464 ************************************ 00:08:18.464 START TEST unittest_event 00:08:18.464 ************************************ 00:08:18.464 16:25:49 -- common/autotest_common.sh@1104 -- # unittest_event 00:08:18.464 16:25:49 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:18.464 00:08:18.464 00:08:18.464 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.464 http://cunit.sourceforge.net/ 00:08:18.464 00:08:18.464 00:08:18.464 Suite: app_suite 00:08:18.464 Test: test_spdk_app_parse_args ...app_ut: invalid option -- 'z' 00:08:18.464 app_ut [options] 00:08:18.464 options: 00:08:18.464 -c, --config JSON config file (default none) 00:08:18.464 --json JSON config file (default none) 00:08:18.465 --json-ignore-init-errors 00:08:18.465 don't exit on invalid config entry 00:08:18.465 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:18.465 -g, --single-file-segments 00:08:18.465 force creating just one hugetlbfs file 00:08:18.465 -h, --help show this usage 00:08:18.465 -i, --shm-id shared memory ID (optional) 00:08:18.465 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:18.465 --lcores lcore to CPU mapping list. The list is in the format: 00:08:18.465 [<,lcores[@CPUs]>...] 00:08:18.465 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:18.465 Within the group, '-' is used for range separator, 00:08:18.465 ',' is used for single number separator. 00:08:18.465 '( )' can be omitted for single element group, 00:08:18.465 '@' can be omitted if cpus and lcores have the same value 00:08:18.465 -n, --mem-channels channel number of memory channels used for DPDK 00:08:18.465 -p, --main-core main (primary) core for DPDK 00:08:18.465 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:18.465 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:18.465 --disable-cpumask-locks Disable CPU core lock files. 
00:08:18.465 --silence-noticelog disable notice level logging to stderr 00:08:18.465 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:18.465 -u, --no-pci disable PCI access 00:08:18.465 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:18.465 --max-delay maximum reactor delay (in microseconds) 00:08:18.465 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:18.465 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:18.465 -R, --huge-unlink unlink huge files after initialization 00:08:18.465 -v, --version print SPDK version 00:08:18.465 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:18.465 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:18.465 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:18.465 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:18.465 Tracepoints vary in size and can use more than one trace entry. 00:08:18.465 --rpcs-allowed comma-separated list of permitted RPCS 00:08:18.465 --env-context Opaque context for use of the env implementation 00:08:18.465 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:18.465 --no-huge run without using hugepages 00:08:18.465 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:18.465 -e, --tpoint-group [:] 00:08:18.465 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:18.465 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:18.465 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:08:18.465 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:18.465 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:18.465 app_ut [options] 00:08:18.465 options: 00:08:18.465 -c, --config JSON config file (default none) 00:08:18.465 --json JSON config file (default none) 00:08:18.465 --json-ignore-init-errors 00:08:18.465 don't exit on invalid config entry 00:08:18.465 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:18.465 -g, --single-file-segments 00:08:18.465 force creating just one hugetlbfs file 00:08:18.465 -h, --help show this usage 00:08:18.465 -i, --shm-id shared memory ID (optional) 00:08:18.465 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:18.465 --lcores lcore to CPU mapping list. The list is in the format: 00:08:18.465 [<,lcores[@CPUs]>...] 00:08:18.465 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:18.465 Within the group, '-' is used for range separator, 00:08:18.465 ',' is used for single number separator. 00:08:18.465 '( )' can be omitted for single element group, 00:08:18.465 '@' can be omitted if cpus and lcores have the same value 00:08:18.465 -n, --mem-channels channel number of memory channels used for DPDK 00:08:18.465 -p, --main-core main (primary) core for DPDK 00:08:18.465 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:18.465 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:18.465 --disable-cpumask-locks Disable CPU core lock files. 
00:08:18.465 --silence-noticelog disable notice level logging to stderr 00:08:18.465 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:18.465 -u, --no-pci disable PCI access 00:08:18.465 app_ut: unrecognized option '--test-long-opt' 00:08:18.465 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:18.465 --max-delay maximum reactor delay (in microseconds) 00:08:18.465 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:18.465 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:18.465 -R, --huge-unlink unlink huge files after initialization 00:08:18.465 -v, --version print SPDK version 00:08:18.465 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:18.465 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:18.465 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:18.465 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:18.465 Tracepoints vary in size and can use more than one trace entry. 00:08:18.465 --rpcs-allowed comma-separated list of permitted RPCS 00:08:18.465 --env-context Opaque context for use of the env implementation 00:08:18.465 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:18.465 --no-huge run without using hugepages 00:08:18.465 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:18.465 -e, --tpoint-group [:] 00:08:18.465 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:18.465 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:18.465 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:08:18.465 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:18.465 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:18.465 [2024-07-13 16:25:49.743793] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:08:18.465 app_ut [options] 00:08:18.465 options: 00:08:18.465 -c, --config JSON config file (default none) 00:08:18.465 --json JSON config file (default none) 00:08:18.465 --json-ignore-init-errors 00:08:18.465 don't exit on invalid config entry 00:08:18.465 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:18.465 -g, --single-file-segments 00:08:18.465 force creating just one hugetlbfs file 00:08:18.465 -h, --help show this usage 00:08:18.465 -i, --shm-id shared memory ID (optional) 00:08:18.465 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:18.465 --lcores lcore to CPU mapping list. The list is in the format:[2024-07-13 16:25:49.744124] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:18.465 00:08:18.465 [<,lcores[@CPUs]>...] 00:08:18.465 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:18.465 Within the group, '-' is used for range separator, 00:08:18.465 ',' is used for single number separator. 
00:08:18.465 '( )' can be omitted for single element group, 00:08:18.465 '@' can be omitted if cpus and lcores have the same value 00:08:18.465 -n, --mem-channels channel number of memory channels used for DPDK 00:08:18.465 -p, --main-core main (primary) core for DPDK 00:08:18.465 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:18.465 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:18.465 --disable-cpumask-locks Disable CPU core lock files. 00:08:18.465 --silence-noticelog disable notice level logging to stderr 00:08:18.465 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:18.465 -u, --no-pci disable PCI access 00:08:18.465 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:18.465 --max-delay maximum reactor delay (in microseconds) 00:08:18.465 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:18.465 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:18.465 -R, --huge-unlink unlink huge files after initialization 00:08:18.465 -v, --version print SPDK version 00:08:18.465 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:18.465 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:18.465 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:18.465 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:18.465 Tracepoints vary in size and can use more than one trace entry. 00:08:18.465 --rpcs-allowed comma-separated list of permitted RPCS 00:08:18.465 --env-context Opaque context for use of the env implementation 00:08:18.465 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:18.465 --no-huge run without using hugepages 00:08:18.465 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:18.465 -e, --tpoint-group [:] 00:08:18.465 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:18.465 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:18.465 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:08:18.465 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:18.465 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:18.466 passed 00:08:18.466 00:08:18.466 [2024-07-13 16:25:49.744368] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:18.466 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.466 suites 1 1 n/a 0 0 00:08:18.466 tests 1 1 1 0 0 00:08:18.466 asserts 8 8 8 0 n/a 00:08:18.466 00:08:18.466 Elapsed time = 0.001 seconds 00:08:18.466 16:25:49 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:18.466 00:08:18.466 00:08:18.466 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.466 http://cunit.sourceforge.net/ 00:08:18.466 00:08:18.466 00:08:18.466 Suite: app_suite 00:08:18.466 Test: test_create_reactor ...passed 00:08:18.466 Test: test_init_reactors ...passed 00:08:18.466 Test: test_event_call ...passed 00:08:18.466 Test: test_schedule_thread ...passed 00:08:18.466 Test: test_reschedule_thread ...passed 00:08:18.466 Test: test_bind_thread ...passed 00:08:18.466 Test: test_for_each_reactor ...passed 00:08:18.466 Test: test_reactor_stats ...passed 00:08:18.466 Test: test_scheduler ...passed 00:08:18.466 Test: test_governor ...passed 00:08:18.466 00:08:18.466 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.466 suites 1 1 n/a 0 0 00:08:18.466 tests 10 10 10 0 0 00:08:18.466 asserts 344 344 344 0 n/a 00:08:18.466 00:08:18.466 Elapsed time = 0.015 seconds 00:08:18.466 00:08:18.466 real 0m0.093s 00:08:18.466 user 0m0.049s 00:08:18.466 sys 0m0.045s 00:08:18.466 16:25:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.466 16:25:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.466 ************************************ 00:08:18.466 END TEST unittest_event 00:08:18.466 ************************************ 00:08:18.466 16:25:49 -- unit/unittest.sh@233 -- # uname -s 00:08:18.466 16:25:49 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:08:18.466 16:25:49 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:08:18.466 16:25:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:18.466 16:25:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.466 16:25:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.466 ************************************ 00:08:18.466 START TEST unittest_ftl 00:08:18.466 ************************************ 00:08:18.466 16:25:49 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:08:18.466 16:25:49 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:18.466 00:08:18.466 00:08:18.466 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.466 http://cunit.sourceforge.net/ 00:08:18.466 00:08:18.466 00:08:18.466 Suite: ftl_band_suite 00:08:18.724 Test: test_band_block_offset_from_addr_base ...passed 00:08:18.724 Test: test_band_block_offset_from_addr_offset ...passed 00:08:18.724 Test: test_band_addr_from_block_offset ...passed 00:08:18.724 Test: test_band_set_addr ...passed 00:08:18.724 Test: test_invalidate_addr ...passed 00:08:18.724 Test: test_next_xfer_addr ...passed 00:08:18.724 00:08:18.724 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.724 suites 1 1 n/a 0 0 00:08:18.724 tests 6 6 6 0 0 00:08:18.724 asserts 30356 30356 30356 0 n/a 00:08:18.724 
00:08:18.724 Elapsed time = 0.275 seconds 00:08:18.983 16:25:50 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:18.983 00:08:18.983 00:08:18.983 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.983 http://cunit.sourceforge.net/ 00:08:18.983 00:08:18.983 00:08:18.983 Suite: ftl_bitmap 00:08:18.983 Test: test_ftl_bitmap_create ...[2024-07-13 16:25:50.273026] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:18.983 passed 00:08:18.983 Test: test_ftl_bitmap_get ...passed 00:08:18.983 Test: test_ftl_bitmap_set ...passed 00:08:18.983 Test: test_ftl_bitmap_clear ...[2024-07-13 16:25:50.273353] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:18.983 passed 00:08:18.983 Test: test_ftl_bitmap_find_first_set ...passed 00:08:18.983 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:18.983 Test: test_ftl_bitmap_count_set ...passed 00:08:18.983 00:08:18.983 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.983 suites 1 1 n/a 0 0 00:08:18.983 tests 7 7 7 0 0 00:08:18.983 asserts 137 137 137 0 n/a 00:08:18.983 00:08:18.983 Elapsed time = 0.001 seconds 00:08:18.983 16:25:50 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:18.983 00:08:18.983 00:08:18.983 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.983 http://cunit.sourceforge.net/ 00:08:18.983 00:08:18.983 00:08:18.983 Suite: ftl_io_suite 00:08:18.983 Test: test_completion ...passed 00:08:18.983 Test: test_multiple_ios ...passed 00:08:18.983 00:08:18.983 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.983 suites 1 1 n/a 0 0 00:08:18.983 tests 2 2 2 0 0 00:08:18.983 asserts 47 47 47 0 n/a 00:08:18.983 00:08:18.983 Elapsed time = 0.003 seconds 00:08:18.983 16:25:50 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:18.983 00:08:18.983 00:08:18.983 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.983 http://cunit.sourceforge.net/ 00:08:18.983 00:08:18.983 00:08:18.983 Suite: ftl_mngt 00:08:18.983 Test: test_next_step ...passed 00:08:18.983 Test: test_continue_step ...passed 00:08:18.983 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:18.983 Test: test_fail_step ...passed 00:08:18.983 Test: test_mngt_call_and_call_rollback ...passed 00:08:18.983 Test: test_nested_process_failure ...passed 00:08:18.983 00:08:18.983 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.983 suites 1 1 n/a 0 0 00:08:18.983 tests 6 6 6 0 0 00:08:18.983 asserts 176 176 176 0 n/a 00:08:18.983 00:08:18.983 Elapsed time = 0.001 seconds 00:08:18.983 16:25:50 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:18.983 00:08:18.983 00:08:18.983 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.983 http://cunit.sourceforge.net/ 00:08:18.983 00:08:18.983 00:08:18.983 Suite: ftl_mempool 00:08:18.983 Test: test_ftl_mempool_create ...passed 00:08:18.983 Test: test_ftl_mempool_get_put ...passed 00:08:18.983 00:08:18.983 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.983 suites 1 1 n/a 0 0 00:08:18.983 tests 2 2 2 0 0 00:08:18.983 asserts 36 36 36 0 n/a 00:08:18.983 00:08:18.983 Elapsed time = 0.000 seconds 00:08:18.983 16:25:50 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:18.983 00:08:18.983 00:08:18.983 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.983 http://cunit.sourceforge.net/ 00:08:18.983 00:08:18.983 00:08:18.983 Suite: ftl_addr64_suite 00:08:18.983 Test: test_addr_cached ...passed 00:08:18.983 00:08:18.983 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.983 suites 1 1 n/a 0 0 00:08:18.983 tests 1 1 1 0 0 00:08:18.983 asserts 1536 1536 1536 0 n/a 00:08:18.983 00:08:18.983 Elapsed time = 0.000 seconds 00:08:18.983 16:25:50 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:19.242 00:08:19.242 00:08:19.242 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.242 http://cunit.sourceforge.net/ 00:08:19.242 00:08:19.242 00:08:19.242 Suite: ftl_sb 00:08:19.242 Test: test_sb_crc_v2 ...passed 00:08:19.242 Test: test_sb_crc_v3 ...passed 00:08:19.242 Test: test_sb_v3_md_layout ...[2024-07-13 16:25:50.470967] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:19.242 [2024-07-13 16:25:50.471976] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:19.242 [2024-07-13 16:25:50.472193] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:19.242 [2024-07-13 16:25:50.472561] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:19.242 [2024-07-13 16:25:50.472720] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:19.242 [2024-07-13 16:25:50.472952] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:19.242 [2024-07-13 16:25:50.473131] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:19.242 [2024-07-13 16:25:50.473346] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:19.243 [2024-07-13 16:25:50.473548] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:19.243 [2024-07-13 16:25:50.473711] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:19.243 [2024-07-13 16:25:50.473866] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:19.243 passed 00:08:19.243 Test: test_sb_v5_md_layout ...passed 00:08:19.243 00:08:19.243 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.243 suites 1 1 n/a 0 0 00:08:19.243 tests 4 4 4 0 0 00:08:19.243 asserts 148 148 148 0 n/a 00:08:19.243 00:08:19.243 Elapsed time = 0.003 seconds 00:08:19.243 16:25:50 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:19.243 00:08:19.243 00:08:19.243 CUnit - A unit testing framework 
for C - Version 2.1-3 00:08:19.243 http://cunit.sourceforge.net/ 00:08:19.243 00:08:19.243 00:08:19.243 Suite: ftl_layout_upgrade 00:08:19.243 Test: test_l2p_upgrade ...passed 00:08:19.243 00:08:19.243 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.243 suites 1 1 n/a 0 0 00:08:19.243 tests 1 1 1 0 0 00:08:19.243 asserts 140 140 140 0 n/a 00:08:19.243 00:08:19.243 Elapsed time = 0.001 seconds 00:08:19.243 00:08:19.243 real 0m0.648s 00:08:19.243 user 0m0.307s 00:08:19.243 sys 0m0.342s 00:08:19.243 16:25:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.243 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.243 ************************************ 00:08:19.243 END TEST unittest_ftl 00:08:19.243 ************************************ 00:08:19.243 16:25:50 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:19.243 16:25:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:19.243 16:25:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.243 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.243 ************************************ 00:08:19.243 START TEST unittest_accel 00:08:19.243 ************************************ 00:08:19.243 16:25:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:19.243 00:08:19.243 00:08:19.243 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.243 http://cunit.sourceforge.net/ 00:08:19.243 00:08:19.243 00:08:19.243 Suite: accel_sequence 00:08:19.243 Test: test_sequence_fill_copy ...passed 00:08:19.243 Test: test_sequence_abort ...passed 00:08:19.243 Test: test_sequence_append_error ...passed 00:08:19.243 Test: test_sequence_completion_error ...[2024-07-13 16:25:50.636222] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f50a26797c0 00:08:19.243 [2024-07-13 16:25:50.636605] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f50a26797c0 00:08:19.243 [2024-07-13 16:25:50.636647] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f50a26797c0 00:08:19.243 [2024-07-13 16:25:50.636696] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f50a26797c0 00:08:19.243 passed 00:08:19.243 Test: test_sequence_decompress ...passed 00:08:19.243 Test: test_sequence_reverse ...passed 00:08:19.243 Test: test_sequence_copy_elision ...passed 00:08:19.243 Test: test_sequence_accel_buffers ...passed 00:08:19.243 Test: test_sequence_memory_domain ...[2024-07-13 16:25:50.647324] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:19.243 passed 00:08:19.243 Test: test_sequence_module_memory_domain ...[2024-07-13 16:25:50.647492] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:19.243 passed 00:08:19.243 Test: test_sequence_crypto ...passed 00:08:19.243 Test: test_sequence_driver ...[2024-07-13 16:25:50.653890] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f50a1a517c0 using driver: ut 00:08:19.243 
[2024-07-13 16:25:50.654004] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f50a1a517c0 through driver: ut 00:08:19.243 passed 00:08:19.243 Test: test_sequence_same_iovs ...passed 00:08:19.243 Test: test_sequence_crc32 ...passed 00:08:19.243 Suite: accel 00:08:19.243 Test: test_spdk_accel_task_complete ...passed 00:08:19.243 Test: test_get_task ...passed 00:08:19.243 Test: test_spdk_accel_submit_copy ...passed 00:08:19.243 Test: test_spdk_accel_submit_dualcast ...passed 00:08:19.243 Test: test_spdk_accel_submit_compare ...passed 00:08:19.243 Test: test_spdk_accel_submit_fill ...passed 00:08:19.243 Test: test_spdk_accel_submit_crc32c ...passed 00:08:19.243 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:19.243 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:19.243 Test: test_spdk_accel_submit_xor ...[2024-07-13 16:25:50.658536] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:19.243 [2024-07-13 16:25:50.658600] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:19.243 passed 00:08:19.243 Test: test_spdk_accel_module_find_by_name ...passed 00:08:19.243 Test: test_spdk_accel_module_register ...passed 00:08:19.243 00:08:19.243 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.243 suites 2 2 n/a 0 0 00:08:19.243 tests 26 26 26 0 0 00:08:19.243 asserts 831 831 831 0 n/a 00:08:19.243 00:08:19.243 Elapsed time = 0.032 seconds 00:08:19.243 00:08:19.243 real 0m0.080s 00:08:19.243 user 0m0.040s 00:08:19.243 sys 0m0.040s 00:08:19.243 16:25:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.243 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.243 ************************************ 00:08:19.243 END TEST unittest_accel 00:08:19.243 ************************************ 00:08:19.502 16:25:50 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:19.502 16:25:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:19.502 16:25:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.502 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.502 ************************************ 00:08:19.502 START TEST unittest_ioat 00:08:19.502 ************************************ 00:08:19.502 16:25:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:19.502 00:08:19.502 00:08:19.502 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.502 http://cunit.sourceforge.net/ 00:08:19.502 00:08:19.502 00:08:19.502 Suite: ioat 00:08:19.502 Test: ioat_state_check ...passed 00:08:19.502 00:08:19.502 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.502 suites 1 1 n/a 0 0 00:08:19.502 tests 1 1 1 0 0 00:08:19.502 asserts 32 32 32 0 n/a 00:08:19.502 00:08:19.502 Elapsed time = 0.000 seconds 00:08:19.502 00:08:19.502 real 0m0.039s 00:08:19.502 user 0m0.025s 00:08:19.502 sys 0m0.014s 00:08:19.502 16:25:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.502 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.502 ************************************ 00:08:19.502 END TEST unittest_ioat 00:08:19.502 ************************************ 00:08:19.502 16:25:50 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:19.502 16:25:50 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:19.502 16:25:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:19.502 16:25:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.502 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.502 ************************************ 00:08:19.502 START TEST unittest_idxd_user 00:08:19.502 ************************************ 00:08:19.502 16:25:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:19.502 00:08:19.502 00:08:19.502 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.502 http://cunit.sourceforge.net/ 00:08:19.502 00:08:19.502 00:08:19.502 Suite: idxd_user 00:08:19.502 Test: test_idxd_wait_cmd ...[2024-07-13 16:25:50.875158] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:19.502 passed 00:08:19.502 Test: test_idxd_reset_dev ...[2024-07-13 16:25:50.875501] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:19.502 [2024-07-13 16:25:50.875657] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:19.502 [2024-07-13 16:25:50.875712] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:19.502 passed 00:08:19.502 Test: test_idxd_group_config ...passed 00:08:19.502 Test: test_idxd_wq_config ...passed 00:08:19.502 00:08:19.502 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.502 suites 1 1 n/a 0 0 00:08:19.502 tests 4 4 4 0 0 00:08:19.502 asserts 20 20 20 0 n/a 00:08:19.502 00:08:19.502 Elapsed time = 0.001 seconds 00:08:19.502 00:08:19.502 real 0m0.040s 00:08:19.502 user 0m0.024s 00:08:19.502 sys 0m0.016s 00:08:19.502 16:25:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.502 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.502 ************************************ 00:08:19.502 END TEST unittest_idxd_user 00:08:19.502 ************************************ 00:08:19.502 16:25:50 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:08:19.502 16:25:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:19.502 16:25:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.502 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.502 ************************************ 00:08:19.502 START TEST unittest_iscsi 00:08:19.502 ************************************ 00:08:19.502 16:25:50 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:08:19.502 16:25:50 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:19.762 00:08:19.762 00:08:19.762 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.763 http://cunit.sourceforge.net/ 00:08:19.763 00:08:19.763 00:08:19.763 Suite: conn_suite 00:08:19.763 Test: read_task_split_in_order_case ...passed 00:08:19.763 Test: read_task_split_reverse_order_case ...passed 00:08:19.763 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:08:19.763 Test: process_non_read_task_completion_test ...passed 00:08:19.763 Test: free_tasks_on_connection ...passed 00:08:19.763 Test: free_tasks_with_queued_datain ...passed 00:08:19.763 Test: 
abort_queued_datain_task_test ...passed 00:08:19.763 Test: abort_queued_datain_tasks_test ...passed 00:08:19.763 00:08:19.763 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.763 suites 1 1 n/a 0 0 00:08:19.763 tests 8 8 8 0 0 00:08:19.763 asserts 230 230 230 0 n/a 00:08:19.763 00:08:19.763 Elapsed time = 0.000 seconds 00:08:19.763 16:25:51 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:19.763 00:08:19.763 00:08:19.763 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.763 http://cunit.sourceforge.net/ 00:08:19.763 00:08:19.763 00:08:19.763 Suite: iscsi_suite 00:08:19.763 Test: param_negotiation_test ...passed 00:08:19.763 Test: list_negotiation_test ...passed 00:08:19.763 Test: parse_valid_test ...passed 00:08:19.763 Test: parse_invalid_test ...[2024-07-13 16:25:51.035513] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:08:19.763 [2024-07-13 16:25:51.035927] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:08:19.763 [2024-07-13 16:25:51.036010] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:08:19.763 [2024-07-13 16:25:51.036117] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:19.763 [2024-07-13 16:25:51.036340] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:19.763 [2024-07-13 16:25:51.036426] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:08:19.763 [2024-07-13 16:25:51.036595] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:19.763 passed 00:08:19.763 00:08:19.763 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.763 suites 1 1 n/a 0 0 00:08:19.763 tests 4 4 4 0 0 00:08:19.763 asserts 161 161 161 0 n/a 00:08:19.763 00:08:19.763 Elapsed time = 0.006 seconds 00:08:19.763 16:25:51 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:19.763 00:08:19.763 00:08:19.763 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.763 http://cunit.sourceforge.net/ 00:08:19.763 00:08:19.763 00:08:19.763 Suite: iscsi_target_node_suite 00:08:19.763 Test: add_lun_test_cases ...[2024-07-13 16:25:51.079789] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:19.763 [2024-07-13 16:25:51.080199] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:19.763 [2024-07-13 16:25:51.080343] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:19.763 [2024-07-13 16:25:51.080390] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:19.763 [2024-07-13 16:25:51.080435] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:19.763 passed 00:08:19.763 Test: allow_any_allowed ...passed 00:08:19.763 Test: allow_ipv6_allowed ...passed 00:08:19.763 Test: allow_ipv6_denied ...passed 00:08:19.763 Test: allow_ipv6_invalid ...passed 00:08:19.763 Test: allow_ipv4_allowed ...passed 00:08:19.763 Test: allow_ipv4_denied ...passed 00:08:19.763 Test: allow_ipv4_invalid 
...passed 00:08:19.763 Test: node_access_allowed ...passed 00:08:19.763 Test: node_access_denied_by_empty_netmask ...passed 00:08:19.763 Test: node_access_multi_initiator_groups_cases ...passed 00:08:19.763 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:19.763 Test: chap_param_test_cases ...[2024-07-13 16:25:51.080915] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:19.763 [2024-07-13 16:25:51.080966] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:19.763 [2024-07-13 16:25:51.081040] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:19.763 passed 00:08:19.763 00:08:19.763 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.763 suites 1 1 n/a 0 0 00:08:19.763 tests 13 13 13 0 0 00:08:19.763 asserts 50 50 50 0 n/a 00:08:19.763 00:08:19.763 Elapsed time = 0.001 seconds 00:08:19.763 [2024-07-13 16:25:51.081094] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:19.763 [2024-07-13 16:25:51.081149] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:08:19.763 16:25:51 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:19.763 00:08:19.763 00:08:19.763 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.763 http://cunit.sourceforge.net/ 00:08:19.763 00:08:19.763 00:08:19.763 Suite: iscsi_suite 00:08:19.763 Test: op_login_check_target_test ...passed 00:08:19.763 Test: op_login_session_normal_test ...[2024-07-13 16:25:51.130288] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:08:19.763 [2024-07-13 16:25:51.130738] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:19.763 [2024-07-13 16:25:51.130796] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:19.763 [2024-07-13 16:25:51.130852] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:19.763 [2024-07-13 16:25:51.130916] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:19.763 [2024-07-13 16:25:51.131057] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:19.763 [2024-07-13 16:25:51.131185] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:19.763 passed 00:08:19.763 Test: maxburstlength_test ...[2024-07-13 16:25:51.131266] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:19.763 [2024-07-13 16:25:51.131627] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:19.763 passed 00:08:19.763 Test: underflow_for_read_transfer_test ...[2024-07-13 16:25:51.131710] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:08:19.763 passed 00:08:19.763 Test: underflow_for_zero_read_transfer_test ...passed 00:08:19.763 Test: underflow_for_request_sense_test ...passed 00:08:19.763 Test: underflow_for_check_condition_test ...passed 00:08:19.763 Test: add_transfer_task_test ...passed 00:08:19.763 Test: get_transfer_task_test ...passed 00:08:19.763 Test: del_transfer_task_test ...passed 00:08:19.763 Test: clear_all_transfer_tasks_test ...passed 00:08:19.763 Test: build_iovs_test ...passed 00:08:19.763 Test: build_iovs_with_md_test ...passed 00:08:19.763 Test: pdu_hdr_op_login_test ...[2024-07-13 16:25:51.133560] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:19.763 [2024-07-13 16:25:51.133716] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:19.763 [2024-07-13 16:25:51.133841] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:08:19.763 passed 00:08:19.763 Test: pdu_hdr_op_text_test ...[2024-07-13 16:25:51.133968] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:19.763 [2024-07-13 16:25:51.134085] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:19.763 [2024-07-13 16:25:51.134144] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:19.763 passed 00:08:19.763 Test: pdu_hdr_op_logout_test ...passed 00:08:19.763 Test: pdu_hdr_op_scsi_test ...[2024-07-13 16:25:51.134254] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
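A note for readers of the suites above and below: almost every *ERROR* line in this section is expected output. These are negative tests that feed malformed input and assert that the library rejects it, which is why error messages and "passed" verdicts appear side by side. A minimal sketch of the pattern, using the real CUnit API with a hypothetical parse_key_value() stand-in for the SPDK function under test:

    #include <CUnit/Basic.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical stand-in for the parser under test: reject input
     * that has no '=' separator, in the spirit of the iscsi_parse_param
     * failures logged above. */
    static int parse_key_value(const char *text)
    {
        return (text == NULL || strchr(text, '=') == NULL) ? -1 : 0;
    }

    static void test_parse_invalid(void)
    {
        /* The library may log *ERROR*: '=' not found here, yet the
         * test passes, because failure is exactly what is asserted. */
        CU_ASSERT(parse_key_value("KeyWithoutValue") != 0);
        CU_ASSERT(parse_key_value("Key=Value") == 0);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();
        CU_pSuite suite = CU_add_suite("param_suite_sketch", NULL, NULL);
        if (suite == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_add_test(suite, "parse_invalid_test", test_parse_invalid);
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();   /* prints the "Run Summary" tables seen in this log */
        CU_cleanup_registry();
        return CU_get_error();
    }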
00:08:19.763 [2024-07-13 16:25:51.134452] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:19.763 [2024-07-13 16:25:51.134500] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:19.763 [2024-07-13 16:25:51.134566] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:19.763 [2024-07-13 16:25:51.134685] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:19.763 [2024-07-13 16:25:51.134809] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:19.763 [2024-07-13 16:25:51.135054] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:19.763 passed 00:08:19.763 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-13 16:25:51.135185] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:19.763 [2024-07-13 16:25:51.135296] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:19.763 passed 00:08:19.763 Test: pdu_hdr_op_nopout_test ...[2024-07-13 16:25:51.135582] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:19.763 [2024-07-13 16:25:51.135701] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:19.763 [2024-07-13 16:25:51.135747] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:19.763 [2024-07-13 16:25:51.135800] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:19.763 passed 00:08:19.763 Test: pdu_hdr_op_data_test ...[2024-07-13 16:25:51.135849] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:19.763 [2024-07-13 16:25:51.135932] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:19.763 [2024-07-13 16:25:51.136011] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:19.763 [2024-07-13 16:25:51.136083] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:19.764 [2024-07-13 16:25:51.136161] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:19.764 [2024-07-13 16:25:51.136492] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:19.764 [2024-07-13 16:25:51.136549] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:19.764 passed 00:08:19.764 Test: empty_text_with_cbit_test ...passed 00:08:19.764 Test: pdu_payload_read_test ...[2024-07-13 16:25:51.138841] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:19.764 passed 00:08:19.764 Test: data_out_pdu_sequence_test ...passed 00:08:19.764 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:19.764 00:08:19.764 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.764 suites 1 1 n/a 0 0 00:08:19.764 tests 24 24 24 0 0 00:08:19.764 asserts 150253 150253 150253 0 n/a 00:08:19.764 00:08:19.764 Elapsed time = 0.018 seconds 00:08:19.764 16:25:51 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:19.764 00:08:19.764 00:08:19.764 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.764 http://cunit.sourceforge.net/ 00:08:19.764 00:08:19.764 00:08:19.764 Suite: init_grp_suite 00:08:19.764 Test: create_initiator_group_success_case ...passed 00:08:19.764 Test: find_initiator_group_success_case ...passed 00:08:19.764 Test: register_initiator_group_twice_case ...passed 00:08:19.764 Test: add_initiator_name_success_case ...passed 00:08:19.764 Test: add_initiator_name_fail_case ...[2024-07-13 16:25:51.198483] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:19.764 passed 00:08:19.764 Test: delete_all_initiator_names_success_case ...passed 00:08:19.764 Test: add_netmask_success_case ...passed 00:08:19.764 Test: add_netmask_fail_case ...[2024-07-13 16:25:51.199066] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:19.764 passed 00:08:19.764 Test: delete_all_netmasks_success_case ...passed 00:08:19.764 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:19.764 Test: netmask_overwrite_all_to_any_case ...passed 00:08:19.764 Test: add_delete_initiator_names_case ...passed 00:08:19.764 Test: add_duplicated_initiator_names_case ...passed 00:08:19.764 Test: delete_nonexisting_initiator_names_case ...passed 00:08:19.764 Test: add_delete_netmasks_case ...passed 00:08:19.764 Test: add_duplicated_netmasks_case ...passed 00:08:19.764 Test: delete_nonexisting_netmasks_case ...passed 00:08:19.764 00:08:19.764 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.764 suites 1 1 n/a 0 0 00:08:19.764 tests 17 17 17 0 0 00:08:19.764 asserts 108 108 108 0 n/a 00:08:19.764 00:08:19.764 Elapsed time = 0.001 seconds 00:08:19.764 16:25:51 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:20.024 00:08:20.024 00:08:20.024 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.024 http://cunit.sourceforge.net/ 00:08:20.024 00:08:20.024 00:08:20.024 Suite: portal_grp_suite 00:08:20.024 Test: portal_create_ipv4_normal_case ...passed 00:08:20.024 Test: portal_create_ipv6_normal_case ...passed 00:08:20.024 Test: portal_create_ipv4_wildcard_case ...passed 00:08:20.024 Test: portal_create_ipv6_wildcard_case ...passed 00:08:20.024 Test: portal_create_twice_case ...[2024-07-13 16:25:51.242193] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:20.024 passed 00:08:20.024 Test: portal_grp_register_unregister_case ...passed 00:08:20.024 Test: portal_grp_register_twice_case ...passed 00:08:20.024 Test: portal_grp_add_delete_case ...passed 00:08:20.024 Test: portal_grp_add_delete_twice_case ...passed 00:08:20.024 00:08:20.024 Run Summary: Type Total Ran 
Passed Failed Inactive 00:08:20.024 suites 1 1 n/a 0 0 00:08:20.024 tests 9 9 9 0 0 00:08:20.024 asserts 44 44 44 0 n/a 00:08:20.024 00:08:20.024 Elapsed time = 0.004 seconds 00:08:20.024 00:08:20.024 real 0m0.302s 00:08:20.024 user 0m0.158s 00:08:20.024 sys 0m0.148s 00:08:20.024 16:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.024 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.024 ************************************ 00:08:20.024 END TEST unittest_iscsi 00:08:20.024 ************************************ 00:08:20.024 16:25:51 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:08:20.024 16:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.024 16:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.024 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.024 ************************************ 00:08:20.024 START TEST unittest_json 00:08:20.024 ************************************ 00:08:20.024 16:25:51 -- common/autotest_common.sh@1104 -- # unittest_json 00:08:20.024 16:25:51 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:20.024 00:08:20.024 00:08:20.024 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.024 http://cunit.sourceforge.net/ 00:08:20.024 00:08:20.024 00:08:20.024 Suite: json 00:08:20.024 Test: test_parse_literal ...passed 00:08:20.024 Test: test_parse_string_simple ...passed 00:08:20.024 Test: test_parse_string_control_chars ...passed 00:08:20.024 Test: test_parse_string_utf8 ...passed 00:08:20.024 Test: test_parse_string_escapes_twochar ...passed 00:08:20.024 Test: test_parse_string_escapes_unicode ...passed 00:08:20.024 Test: test_parse_number ...passed 00:08:20.024 Test: test_parse_array ...passed 00:08:20.024 Test: test_parse_object ...passed 00:08:20.024 Test: test_parse_nesting ...passed 00:08:20.024 Test: test_parse_comment ...passed 00:08:20.024 00:08:20.024 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.024 suites 1 1 n/a 0 0 00:08:20.024 tests 11 11 11 0 0 00:08:20.024 asserts 1516 1516 1516 0 n/a 00:08:20.024 00:08:20.024 Elapsed time = 0.002 seconds 00:08:20.024 16:25:51 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:20.024 00:08:20.024 00:08:20.024 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.024 http://cunit.sourceforge.net/ 00:08:20.024 00:08:20.024 00:08:20.024 Suite: json 00:08:20.024 Test: test_strequal ...passed 00:08:20.024 Test: test_num_to_uint16 ...passed 00:08:20.024 Test: test_num_to_int32 ...passed 00:08:20.024 Test: test_num_to_uint64 ...passed 00:08:20.024 Test: test_decode_object ...passed 00:08:20.024 Test: test_decode_array ...passed 00:08:20.024 Test: test_decode_bool ...passed 00:08:20.024 Test: test_decode_uint16 ...passed 00:08:20.024 Test: test_decode_int32 ...passed 00:08:20.024 Test: test_decode_uint32 ...passed 00:08:20.024 Test: test_decode_uint64 ...passed 00:08:20.024 Test: test_decode_string ...passed 00:08:20.024 Test: test_decode_uuid ...passed 00:08:20.024 Test: test_find ...passed 00:08:20.024 Test: test_find_array ...passed 00:08:20.024 Test: test_iterating ...passed 00:08:20.024 Test: test_free_object ...passed 00:08:20.024 00:08:20.024 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.024 suites 1 1 n/a 0 0 00:08:20.024 tests 17 17 17 0 0 00:08:20.024 asserts 236 236 236 0 n/a 00:08:20.024 00:08:20.024 Elapsed time = 0.001 seconds 00:08:20.024 16:25:51 -- 
unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:20.024 00:08:20.024 00:08:20.024 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.024 http://cunit.sourceforge.net/ 00:08:20.024 00:08:20.024 00:08:20.024 Suite: json 00:08:20.024 Test: test_write_literal ...passed 00:08:20.024 Test: test_write_string_simple ...passed 00:08:20.024 Test: test_write_string_escapes ...passed 00:08:20.024 Test: test_write_string_utf16le ...passed 00:08:20.024 Test: test_write_number_int32 ...passed 00:08:20.024 Test: test_write_number_uint32 ...passed 00:08:20.024 Test: test_write_number_uint128 ...passed 00:08:20.024 Test: test_write_string_number_uint128 ...passed 00:08:20.024 Test: test_write_number_int64 ...passed 00:08:20.024 Test: test_write_number_uint64 ...passed 00:08:20.024 Test: test_write_number_double ...passed 00:08:20.024 Test: test_write_uuid ...passed 00:08:20.024 Test: test_write_array ...passed 00:08:20.024 Test: test_write_object ...passed 00:08:20.024 Test: test_write_nesting ...passed 00:08:20.024 Test: test_write_val ...passed 00:08:20.024 00:08:20.024 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.024 suites 1 1 n/a 0 0 00:08:20.024 tests 16 16 16 0 0 00:08:20.024 asserts 918 918 918 0 n/a 00:08:20.024 00:08:20.024 Elapsed time = 0.005 seconds 00:08:20.024 16:25:51 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:20.024 00:08:20.024 00:08:20.024 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.024 http://cunit.sourceforge.net/ 00:08:20.024 00:08:20.024 00:08:20.024 Suite: jsonrpc 00:08:20.024 Test: test_parse_request ...passed 00:08:20.024 Test: test_parse_request_streaming ...passed 00:08:20.024 00:08:20.024 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.024 suites 1 1 n/a 0 0 00:08:20.024 tests 2 2 2 0 0 00:08:20.024 asserts 289 289 289 0 n/a 00:08:20.024 00:08:20.024 Elapsed time = 0.005 seconds 00:08:20.353 00:08:20.353 real 0m0.173s 00:08:20.353 user 0m0.084s 00:08:20.353 sys 0m0.091s 00:08:20.353 16:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.353 ************************************ 00:08:20.353 END TEST unittest_json 00:08:20.353 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 ************************************ 00:08:20.353 16:25:51 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:08:20.353 16:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.353 16:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.353 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 ************************************ 00:08:20.353 START TEST unittest_rpc 00:08:20.353 ************************************ 00:08:20.353 16:25:51 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:08:20.353 16:25:51 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:20.353 00:08:20.353 00:08:20.353 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.353 http://cunit.sourceforge.net/ 00:08:20.353 00:08:20.353 00:08:20.353 Suite: rpc 00:08:20.353 Test: test_jsonrpc_handler ...passed 00:08:20.353 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:20.353 Test: test_rpc_get_methods ...passed 00:08:20.353 Test: test_rpc_spdk_get_version ...passed 00:08:20.353 Test: test_spdk_rpc_listen_close ...passed[2024-07-13 16:25:51.601054] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 
378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:20.353 00:08:20.353 00:08:20.353 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.353 suites 1 1 n/a 0 0 00:08:20.353 tests 5 5 5 0 0 00:08:20.353 asserts 20 20 20 0 n/a 00:08:20.353 00:08:20.353 Elapsed time = 0.000 seconds 00:08:20.353 00:08:20.353 real 0m0.041s 00:08:20.353 user 0m0.009s 00:08:20.353 sys 0m0.032s 00:08:20.353 16:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.353 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 ************************************ 00:08:20.353 END TEST unittest_rpc 00:08:20.353 ************************************ 00:08:20.353 16:25:51 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:20.353 16:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.353 16:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.353 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 ************************************ 00:08:20.353 START TEST unittest_notify 00:08:20.353 ************************************ 00:08:20.353 16:25:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:20.353 00:08:20.353 00:08:20.353 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.353 http://cunit.sourceforge.net/ 00:08:20.353 00:08:20.353 00:08:20.353 Suite: app_suite 00:08:20.353 Test: notify ...passed 00:08:20.353 00:08:20.353 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.353 suites 1 1 n/a 0 0 00:08:20.353 tests 1 1 1 0 0 00:08:20.353 asserts 13 13 13 0 n/a 00:08:20.353 00:08:20.353 Elapsed time = 0.000 seconds 00:08:20.353 00:08:20.353 real 0m0.039s 00:08:20.353 user 0m0.022s 00:08:20.353 sys 0m0.018s 00:08:20.353 16:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.353 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 ************************************ 00:08:20.353 END TEST unittest_notify 00:08:20.353 ************************************ 00:08:20.353 16:25:51 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:08:20.353 16:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.353 16:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.353 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.353 ************************************ 00:08:20.353 START TEST unittest_nvme 00:08:20.353 ************************************ 00:08:20.353 16:25:51 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:08:20.353 16:25:51 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:20.613 00:08:20.613 00:08:20.613 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.613 http://cunit.sourceforge.net/ 00:08:20.613 00:08:20.613 00:08:20.613 Suite: nvme 00:08:20.613 Test: test_opc_data_transfer ...passed 00:08:20.613 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:20.613 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:20.613 Test: test_trid_parse_and_compare ...[2024-07-13 16:25:51.819521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:20.613 [2024-07-13 16:25:51.819932] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:20.613 [2024-07-13 16:25:51.820078] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:20.613 [2024-07-13 16:25:51.820138] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:20.613 [2024-07-13 16:25:51.820190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:08:20.613 [2024-07-13 16:25:51.820533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:20.613 passed 00:08:20.613 Test: test_trid_trtype_str ...passed 00:08:20.613 Test: test_trid_adrfam_str ...passed 00:08:20.613 Test: test_nvme_ctrlr_probe ...[2024-07-13 16:25:51.820785] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:20.613 passed 00:08:20.613 Test: test_spdk_nvme_probe ...[2024-07-13 16:25:51.820916] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:20.613 [2024-07-13 16:25:51.820962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:20.613 [2024-07-13 16:25:51.821101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:20.613 [2024-07-13 16:25:51.821162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:20.613 passed 00:08:20.613 Test: test_spdk_nvme_connect ...[2024-07-13 16:25:51.821270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:08:20.613 [2024-07-13 16:25:51.821705] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:20.613 passed 00:08:20.613 Test: test_nvme_ctrlr_probe_internal ...[2024-07-13 16:25:51.821789] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:08:20.613 [2024-07-13 16:25:51.821945] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:20.613 [2024-07-13 16:25:51.821999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:20.613 passed 00:08:20.613 Test: test_nvme_init_controllers ...passed 00:08:20.613 Test: test_nvme_driver_init ...[2024-07-13 16:25:51.822097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:20.613 [2024-07-13 16:25:51.822210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:20.613 [2024-07-13 16:25:51.822261] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:20.613 [2024-07-13 16:25:51.931539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:20.613 passed 00:08:20.613 Test: test_spdk_nvme_detach ...passed 00:08:20.613 Test: test_nvme_completion_poll_cb ...passed 00:08:20.613 Test: test_nvme_user_copy_cmd_complete ...[2024-07-13 16:25:51.931793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:20.613 passed 00:08:20.613 Test: test_nvme_allocate_request_null ...passed 
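The parse_next_key and spdk_nvme_transport_id_parse errors above are driven by deliberately malformed transport-ID strings. For contrast, a sketch of parsing a well-formed one with the same public API (the PCI address is a placeholder):

    #include <spdk/nvme.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct spdk_nvme_transport_id trid;

        memset(&trid, 0, sizeof(trid));
        /* "key:value" pairs separated by whitespace; a missing ':' is what
         * produces the "Key without ':' or '=' separator" error above */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:PCIe traddr:0000:5e:00.0") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }
        printf("trtype=%d traddr=%s\n", (int)trid.trtype, trid.traddr);
        return 0;
    }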
00:08:20.613 Test: test_nvme_allocate_request ...passed 00:08:20.613 Test: test_nvme_free_request ...passed 00:08:20.613 Test: test_nvme_allocate_request_user_copy ...passed 00:08:20.613 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:20.613 Test: test_nvme_request_check_timeout ...passed 00:08:20.613 Test: test_nvme_wait_for_completion ...passed 00:08:20.613 Test: test_spdk_nvme_parse_func ...passed 00:08:20.613 Test: test_spdk_nvme_detach_async ...passed 00:08:20.613 Test: test_nvme_parse_addr ...[2024-07-13 16:25:51.932925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:20.613 passed 00:08:20.613 00:08:20.613 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.613 suites 1 1 n/a 0 0 00:08:20.613 tests 25 25 25 0 0 00:08:20.613 asserts 326 326 326 0 n/a 00:08:20.613 00:08:20.613 Elapsed time = 0.007 seconds 00:08:20.613 16:25:51 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:20.613 00:08:20.613 00:08:20.613 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.613 http://cunit.sourceforge.net/ 00:08:20.613 00:08:20.613 00:08:20.613 Suite: nvme_ctrlr 00:08:20.613 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-13 16:25:51.981465] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.613 passed 00:08:20.613 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-13 16:25:51.983346] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.613 passed 00:08:20.613 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-13 16:25:51.984621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.613 passed 00:08:20.613 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-13 16:25:51.985845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.613 passed 00:08:20.613 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-13 16:25:51.987118] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.613 [2024-07-13 16:25:51.988484] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 16:25:51.989946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 16:25:51.991213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:20.614 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-13 16:25:51.993831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.614 [2024-07-13 16:25:51.996307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 16:25:51.997581] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:20.614 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-13 16:25:52.000175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.614 [2024-07-13 16:25:52.001442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 16:25:52.003862] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:20.614 Test: test_nvme_ctrlr_init_delay ...[2024-07-13 16:25:52.006446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.614 passed 00:08:20.614 Test: test_alloc_io_qpair_rr_1 ...[2024-07-13 16:25:52.007959] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.614 [2024-07-13 16:25:52.008205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:20.614 [2024-07-13 16:25:52.008574] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:20.614 [2024-07-13 16:25:52.008736] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:20.614 [2024-07-13 16:25:52.008851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:20.614 passed 00:08:20.614 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:20.614 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:20.614 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-13 16:25:52.009140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.614 passed 00:08:20.614 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-13 16:25:52.009552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:20.614 [2024-07-13 16:25:52.009769] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:20.614 passed 00:08:20.614 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-13 16:25:52.010129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:20.614 [2024-07-13 16:25:52.010349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:20.614 [2024-07-13 16:25:52.010499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
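The "No free I/O queue IDs" and "invalid queue priority" messages in the alloc_io_qpair tests above come from I/O qpair creation. A sketch of the corresponding public calls, assuming ctrlr came from a real attach (e.g. spdk_nvme_connect()) rather than being constructed by hand as the unit test does:

    #include <spdk/nvme.h>

    static int create_and_free_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;
        struct spdk_nvme_qpair *qpair;

        /* start from library defaults instead of zeroed memory */
        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
        if (qpair == NULL) {
            /* the failure path the tests drive on purpose, e.g. when the
             * controller has no free queue IDs left */
            return -1;
        }
        /* ... submit I/O on the qpair ... */
        spdk_nvme_ctrlr_free_io_qpair(qpair);
        return 0;
    }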
00:08:20.614 [2024-07-13 16:25:52.010610] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:20.614 passed 00:08:20.614 Test: test_nvme_ctrlr_fail ...[2024-07-13 16:25:52.010716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:08:20.614 passed 00:08:20.614 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:20.614 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:20.614 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:20.614 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-13 16:25:52.011108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:21.182 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:21.182 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:21.182 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-13 16:25:52.363836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-13 16:25:52.370728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-13 16:25:52.371911] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 [2024-07-13 16:25:52.371961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:21.182 passed 00:08:21.182 Test: test_alloc_io_qpair_fail ...[2024-07-13 16:25:52.373077] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_add_remove_process ...passed 00:08:21.182 Test: test_nvme_ctrlr_set_arbitration_feature ...passed[2024-07-13 16:25:52.373181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:21.182 00:08:21.182 Test: test_nvme_ctrlr_set_state ...passed 00:08:21.182 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-13 16:25:52.373324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
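The "Specified timeout would cause integer overflow. Defaulting to no timeout." message is a guard on converting a millisecond timeout into an absolute tick deadline. A hedged sketch of that guard (names illustrative, not SPDK's):

    #include <stdint.h>

    /* Illustrative: before computing now + timeout_ms * ticks_per_ms, verify the
     * product cannot wrap (ticks_per_ms assumed nonzero); on overflow, fall back
     * to the "no timeout" sentinel, which is exactly what the log describes. */
    static uint64_t deadline_ticks(uint64_t now, uint64_t timeout_ms,
                                   uint64_t ticks_per_ms)
    {
            if (timeout_ms > (UINT64_MAX - now) / ticks_per_ms) {
                    return UINT64_MAX;      /* no timeout */
            }
            return now + timeout_ms * ticks_per_ms;
    }
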
00:08:21.182 [2024-07-13 16:25:52.373364] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-13 16:25:52.393652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-13 16:25:52.431806] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_reset ...[2024-07-13 16:25:52.433368] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_aer_callback ...[2024-07-13 16:25:52.433700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-13 16:25:52.435070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:21.182 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:21.182 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-13 16:25:52.436738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:21.182 Test: test_nvme_ctrlr_ana_resize ...[2024-07-13 16:25:52.438073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:21.182 Test: test_nvme_transport_ctrlr_ready ...passed 00:08:21.182 Test: test_nvme_ctrlr_disable ...[2024-07-13 16:25:52.439589] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:21.182 [2024-07-13 16:25:52.439629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:08:21.182 [2024-07-13 16:25:52.439666] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:21.182 passed 00:08:21.182 00:08:21.182 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.182 suites 1 1 n/a 0 0 00:08:21.182 tests 43 43 43 0 0 00:08:21.182 asserts 10418 10418 10418 0 n/a 00:08:21.182 00:08:21.182 Elapsed time = 0.419 seconds 00:08:21.182 16:25:52 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:08:21.182 00:08:21.182 00:08:21.182 CUnit - A unit testing framework for C - Version 2.1-3 
00:08:21.182 http://cunit.sourceforge.net/ 00:08:21.182 00:08:21.182 00:08:21.182 Suite: nvme_ctrlr_cmd 00:08:21.182 Test: test_get_log_pages ...passed 00:08:21.182 Test: test_set_feature_cmd ...passed 00:08:21.182 Test: test_set_feature_ns_cmd ...passed 00:08:21.182 Test: test_get_feature_cmd ...passed 00:08:21.182 Test: test_get_feature_ns_cmd ...passed 00:08:21.182 Test: test_abort_cmd ...passed 00:08:21.182 Test: test_set_host_id_cmds ...passed 00:08:21.182 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:21.182 Test: test_io_raw_cmd ...passed 00:08:21.182 Test: test_io_raw_cmd_with_md ...passed 00:08:21.182 Test: test_namespace_attach ...passed 00:08:21.182 Test: test_namespace_detach ...passed 00:08:21.182 Test: test_namespace_create ...[2024-07-13 16:25:52.503000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:21.182 passed 00:08:21.182 Test: test_namespace_delete ...passed 00:08:21.182 Test: test_doorbell_buffer_config ...passed 00:08:21.182 Test: test_format_nvme ...passed 00:08:21.182 Test: test_fw_commit ...passed 00:08:21.182 Test: test_fw_image_download ...passed 00:08:21.182 Test: test_sanitize ...passed 00:08:21.182 Test: test_directive ...passed 00:08:21.182 Test: test_nvme_request_add_abort ...passed 00:08:21.182 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:21.182 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:21.182 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:21.182 00:08:21.182 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.182 suites 1 1 n/a 0 0 00:08:21.182 tests 24 24 24 0 0 00:08:21.182 asserts 198 198 198 0 n/a 00:08:21.182 00:08:21.182 Elapsed time = 0.001 seconds 00:08:21.182 16:25:52 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:21.182 00:08:21.182 00:08:21.182 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.182 http://cunit.sourceforge.net/ 00:08:21.182 00:08:21.182 00:08:21.182 Suite: nvme_ctrlr_cmd 00:08:21.182 Test: test_geometry_cmd ...passed 00:08:21.182 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:21.182 00:08:21.182 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.182 suites 1 1 n/a 0 0 00:08:21.182 tests 2 2 2 0 0 00:08:21.182 asserts 7 7 7 0 n/a 00:08:21.182 00:08:21.182 Elapsed time = 0.000 seconds 00:08:21.182 16:25:52 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:21.182 00:08:21.182 00:08:21.182 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.182 http://cunit.sourceforge.net/ 00:08:21.182 00:08:21.182 00:08:21.182 Suite: nvme 00:08:21.182 Test: test_nvme_ns_construct ...passed 00:08:21.182 Test: test_nvme_ns_uuid ...passed 00:08:21.182 Test: test_nvme_ns_csi ...passed 00:08:21.182 Test: test_nvme_ns_data ...passed 00:08:21.182 Test: test_nvme_ns_set_identify_data ...passed 00:08:21.182 Test: test_spdk_nvme_ns_get_values ...passed 00:08:21.182 Test: test_spdk_nvme_ns_is_active ...passed 00:08:21.182 Test: spdk_nvme_ns_supports ...passed 00:08:21.182 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:21.182 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:21.182 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:21.182 Test: test_nvme_ns_find_id_desc ...passed 00:08:21.182 00:08:21.182 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.182 suites 1 1 n/a 0 0 00:08:21.182 tests 
12 12 12 0 0 00:08:21.182 asserts 83 83 83 0 n/a 00:08:21.182 00:08:21.182 Elapsed time = 0.000 seconds 00:08:21.182 16:25:52 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:21.182 00:08:21.182 00:08:21.182 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.182 http://cunit.sourceforge.net/ 00:08:21.182 00:08:21.182 00:08:21.182 Suite: nvme_ns_cmd 00:08:21.182 Test: split_test ...passed 00:08:21.183 Test: split_test2 ...passed 00:08:21.183 Test: split_test3 ...passed 00:08:21.183 Test: split_test4 ...passed 00:08:21.183 Test: test_nvme_ns_cmd_flush ...passed 00:08:21.183 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:21.183 Test: test_nvme_ns_cmd_copy ...passed 00:08:21.183 Test: test_io_flags ...[2024-07-13 16:25:52.623044] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:21.183 passed 00:08:21.183 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:21.183 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:21.183 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:21.183 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:21.183 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:21.183 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:21.183 Test: test_cmd_child_request ...passed 00:08:21.183 Test: test_nvme_ns_cmd_readv ...passed 00:08:21.183 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:21.183 Test: test_nvme_ns_cmd_writev ...[2024-07-13 16:25:52.624227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:21.183 passed 00:08:21.183 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:21.183 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:21.183 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:21.183 Test: test_nvme_ns_cmd_comparev ...passed 00:08:21.183 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:21.183 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:21.183 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:21.183 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:21.183 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:21.183 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:08:21.183 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-13 16:25:52.626097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:21.183 [2024-07-13 16:25:52.626189] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:21.183 passed 00:08:21.183 Test: test_nvme_ns_cmd_verify ...passed 00:08:21.183 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:21.183 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:21.183 00:08:21.183 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.183 suites 1 1 n/a 0 0 00:08:21.183 tests 32 32 32 0 0 00:08:21.183 asserts 550 550 550 0 n/a 00:08:21.183 00:08:21.183 Elapsed time = 0.004 seconds 00:08:21.183 16:25:52 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:21.443 00:08:21.443 00:08:21.443 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.443 http://cunit.sourceforge.net/ 00:08:21.443 00:08:21.443 00:08:21.443 Suite: nvme_ns_cmd 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
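The _is_io_flags_valid rejections in the nvme_ns_cmd suite above (0xfffc and 0xffff000f both invalid) amount to checking that no reserved io_flags bits are set. A sketch with a hypothetical valid-bit mask, chosen only so that both logged values fail; the real SPDK_NVME_IO_FLAGS_* layout differs:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical mask: with it, 0xfffc and 0xffff000f both carry reserved bits. */
    #define VALID_IO_FLAGS_MASK 0xffff0003u

    static bool is_io_flags_valid(uint32_t io_flags)
    {
            return (io_flags & ~VALID_IO_FLAGS_MASK) == 0;
    }
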
00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:21.443 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:21.443 00:08:21.443 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.443 suites 1 1 n/a 0 0 00:08:21.443 tests 12 12 12 0 0 00:08:21.443 asserts 123 123 123 0 n/a 00:08:21.443 00:08:21.443 Elapsed time = 0.001 seconds 00:08:21.443 16:25:52 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:21.443 00:08:21.443 00:08:21.443 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.443 http://cunit.sourceforge.net/ 00:08:21.443 00:08:21.443 00:08:21.443 Suite: nvme_qpair 00:08:21.443 Test: test3 ...passed 00:08:21.443 Test: test_ctrlr_failed ...passed 00:08:21.443 Test: struct_packing ...passed 00:08:21.443 Test: test_nvme_qpair_process_completions ...[2024-07-13 16:25:52.705565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:21.443 [2024-07-13 16:25:52.705947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:21.443 passed 00:08:21.443 Test: test_nvme_completion_is_retry ...passed 00:08:21.443 Test: test_get_status_string ...passed 00:08:21.443 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:08:21.443 Test: test_nvme_qpair_submit_request ...[2024-07-13 16:25:52.706012] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:21.443 [2024-07-13 16:25:52.706115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:21.443 passed 00:08:21.443 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:21.443 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:21.443 Test: test_nvme_qpair_init_deinit ...passed 00:08:21.443 Test: test_nvme_get_sgl_print_info ...passed 00:08:21.443 00:08:21.443 [2024-07-13 16:25:52.706563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:21.443 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.443 suites 1 1 n/a 0 0 00:08:21.443 tests 12 12 12 0 0 00:08:21.443 asserts 154 154 154 0 n/a 00:08:21.443 00:08:21.443 Elapsed time = 0.001 seconds 00:08:21.443 16:25:52 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:21.443 00:08:21.443 00:08:21.443 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.443 http://cunit.sourceforge.net/ 00:08:21.443 00:08:21.443 00:08:21.443 Suite: nvme_pcie 00:08:21.443 Test: test_prp_list_append 
...[2024-07-13 16:25:52.751781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:21.443 [2024-07-13 16:25:52.752206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:21.443 [2024-07-13 16:25:52.752283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:21.443 [2024-07-13 16:25:52.752579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:21.443 [2024-07-13 16:25:52.752683] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:21.443 passed 00:08:21.443 Test: test_nvme_pcie_hotplug_monitor ...passed 00:08:21.443 Test: test_shadow_doorbell_update ...passed 00:08:21.443 Test: test_build_contig_hw_sgl_request ...passed 00:08:21.443 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:21.443 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:21.443 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:21.443 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:08:21.443 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:21.443 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...[2024-07-13 16:25:52.752865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:21.443 passed 00:08:21.443 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:08:21.443 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:08:21.443 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-13 16:25:52.752955] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
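The nvme_pcie_prp_list_append errors above are alignment arithmetic: PRP virtual addresses must be dword aligned (0x100001 has bit 0 set), and every PRP entry after the first must start on a memory-page boundary (0x900800 sits 0x800 bytes into a page). A sketch, assuming 4 KiB pages:

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u                    /* assumption: 4 KiB pages */

    static bool dword_aligned(uintptr_t addr)  /* 0x100001 & 0x3 == 0x1 -> fail */
    {
            return (addr & 0x3u) == 0;
    }

    static bool page_aligned(uintptr_t addr)   /* 0x900800 & 0xfff == 0x800 -> fail */
    {
            return (addr & (PAGE_SIZE - 1)) == 0;
    }
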
00:08:21.443 [2024-07-13 16:25:52.753038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:21.443 [2024-07-13 16:25:52.753098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:21.443 passed 00:08:21.443 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:08:21.443 00:08:21.443 [2024-07-13 16:25:52.753178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:21.443 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.443 suites 1 1 n/a 0 0 00:08:21.443 tests 14 14 14 0 0 00:08:21.443 asserts 235 235 235 0 n/a 00:08:21.443 00:08:21.443 Elapsed time = 0.001 seconds 00:08:21.443 16:25:52 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:21.443 00:08:21.443 00:08:21.444 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.444 http://cunit.sourceforge.net/ 00:08:21.444 00:08:21.444 00:08:21.444 Suite: nvme_ns_cmd 00:08:21.444 Test: nvme_poll_group_create_test ...passed 00:08:21.444 Test: nvme_poll_group_add_remove_test ...passed 00:08:21.444 Test: nvme_poll_group_process_completions ...passed 00:08:21.444 Test: nvme_poll_group_destroy_test ...passed 00:08:21.444 Test: nvme_poll_group_get_free_stats ...passed 00:08:21.444 00:08:21.444 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.444 suites 1 1 n/a 0 0 00:08:21.444 tests 5 5 5 0 0 00:08:21.444 asserts 75 75 75 0 n/a 00:08:21.444 00:08:21.444 Elapsed time = 0.001 seconds 00:08:21.444 16:25:52 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:21.444 00:08:21.444 00:08:21.444 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.444 http://cunit.sourceforge.net/ 00:08:21.444 00:08:21.444 00:08:21.444 Suite: nvme_quirks 00:08:21.444 Test: test_nvme_quirks_striping ...passed 00:08:21.444 00:08:21.444 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.444 suites 1 1 n/a 0 0 00:08:21.444 tests 1 1 1 0 0 00:08:21.444 asserts 5 5 5 0 n/a 00:08:21.444 00:08:21.444 Elapsed time = 0.000 seconds 00:08:21.444 16:25:52 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:21.444 00:08:21.444 00:08:21.444 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.444 http://cunit.sourceforge.net/ 00:08:21.444 00:08:21.444 00:08:21.444 Suite: nvme_tcp 00:08:21.444 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:21.444 Test: test_nvme_tcp_build_iovs ...passed 00:08:21.444 Test: test_nvme_tcp_build_sgl_request ...[2024-07-13 16:25:52.871500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffe1e5e61b0, and the iovcnt=16, remaining_size=28672 00:08:21.444 passed 00:08:21.444 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:21.444 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:21.444 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:21.444 Test: test_nvme_tcp_req_get ...passed 00:08:21.444 Test: test_nvme_tcp_req_init ...passed 00:08:21.444 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:21.444 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:21.444 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-13 16:25:52.872094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7ed0 is same with the state(6) to be set 00:08:21.444 passed 00:08:21.444 Test: test_nvme_tcp_alloc_reqs ...passed 00:08:21.444 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-13 16:25:52.872443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7060 is same with the state(5) to be set 00:08:21.444 passed 00:08:21.444 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-13 16:25:52.872498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffe1e5e7b90 00:08:21.444 [2024-07-13 16:25:52.872547] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:21.444 [2024-07-13 16:25:52.872635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7520 is same with the state(5) to be set 00:08:21.444 [2024-07-13 16:25:52.872689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:21.444 [2024-07-13 16:25:52.872781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7520 is same with the state(5) to be set 00:08:21.444 [2024-07-13 16:25:52.872829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:21.444 [2024-07-13 16:25:52.872862] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7520 is same with the state(5) to be set 00:08:21.444 [2024-07-13 16:25:52.872899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7520 is same with the state(5) to be set 00:08:21.444 [2024-07-13 16:25:52.872943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7520 is same with the state(5) to be set 00:08:21.444 [2024-07-13 16:25:52.873002] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7520 is same with the state(5) to be set 00:08:21.444 [2024-07-13 16:25:52.873039] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7520 is same with the state(5) to be set 00:08:21.444 passed 00:08:21.444 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-13 16:25:52.873086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7520 is same with the state(5) to be set 00:08:21.444 [2024-07-13 16:25:52.873240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:21.444 [2024-07-13 16:25:52.873294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:21.444 [2024-07-13 16:25:52.873553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:21.444 passed 00:08:21.444 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:21.444 
Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-13 16:25:52.873666] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe1e5e76d0): PDU Sequence Error 00:08:21.444 passed 00:08:21.444 Test: test_nvme_tcp_icresp_handle ...[2024-07-13 16:25:52.873776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:21.444 [2024-07-13 16:25:52.873814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:21.444 [2024-07-13 16:25:52.873856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7070 is same with the state(5) to be set 00:08:21.444 [2024-07-13 16:25:52.873909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:21.444 [2024-07-13 16:25:52.873954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7070 is same with the state(5) to be set 00:08:21.444 passed 00:08:21.444 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:08:21.444 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-13 16:25:52.874011] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e7070 is same with the state(0) to be set 00:08:21.444 [2024-07-13 16:25:52.874064] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe1e5e7b90): PDU Sequence Error 00:08:21.444 [2024-07-13 16:25:52.874137] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffe1e5e6350 00:08:21.444 passed 00:08:21.444 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:08:21.444 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-13 16:25:52.874304] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffe1e5e59d0, errno=0, rc=0 00:08:21.444 [2024-07-13 16:25:52.874346] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e59d0 is same with the state(5) to be set 00:08:21.444 [2024-07-13 16:25:52.874437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe1e5e59d0 is same with the state(5) to be set 00:08:21.444 passed 00:08:21.444 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-13 16:25:52.874485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe1e5e59d0 (0): Success 00:08:21.444 [2024-07-13 16:25:52.874525] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe1e5e59d0 (0): Success 00:08:21.703 [2024-07-13 16:25:53.001166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:21.703 [2024-07-13 16:25:53.001316] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:21.703 passed 00:08:21.703 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:21.703 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:08:21.703 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-13 16:25:53.001516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:21.703 [2024-07-13 16:25:53.001563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:21.703 [2024-07-13 16:25:53.001771] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:21.703 [2024-07-13 16:25:53.001878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:21.703 [2024-07-13 16:25:53.001983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:21.703 [2024-07-13 16:25:53.002048] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:21.703 passed 00:08:21.703 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-13 16:25:53.002155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:08:21.703 [2024-07-13 16:25:53.002218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:21.703 passed 00:08:21.703 00:08:21.703 [2024-07-13 16:25:53.002349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:08:21.703 [2024-07-13 16:25:53.002396] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:21.703 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.703 suites 1 1 n/a 0 0 00:08:21.703 tests 27 27 27 0 0 00:08:21.703 asserts 624 624 624 0 n/a 00:08:21.703 00:08:21.703 Elapsed time = 0.131 seconds 00:08:21.703 16:25:53 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:21.703 00:08:21.703 00:08:21.703 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.703 http://cunit.sourceforge.net/ 00:08:21.703 00:08:21.703 00:08:21.703 Suite: nvme_transport 00:08:21.703 Test: test_nvme_get_transport ...passed 00:08:21.703 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:21.703 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:21.703 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:21.703 Test: test_ctrlr_get_memory_domains ...passed 00:08:21.703 00:08:21.703 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.703 suites 1 1 n/a 0 0 00:08:21.703 tests 5 5 5 0 0 00:08:21.703 asserts 28 28 28 0 n/a 00:08:21.703 00:08:21.703 Elapsed time = 0.000 seconds 00:08:21.703 16:25:53 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:21.703 00:08:21.703 00:08:21.703 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.703 http://cunit.sourceforge.net/ 00:08:21.703 00:08:21.703 00:08:21.703 Suite: nvme_io_msg 00:08:21.703 Test: test_nvme_io_msg_send ...passed 00:08:21.703 Test: 
test_nvme_io_msg_process ...passed 00:08:21.703 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:21.703 00:08:21.703 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.703 suites 1 1 n/a 0 0 00:08:21.703 tests 3 3 3 0 0 00:08:21.703 asserts 56 56 56 0 n/a 00:08:21.703 00:08:21.703 Elapsed time = 0.000 seconds 00:08:21.703 16:25:53 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:21.703 00:08:21.703 00:08:21.703 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.703 http://cunit.sourceforge.net/ 00:08:21.703 00:08:21.703 00:08:21.703 Suite: nvme_pcie_common 00:08:21.703 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-13 16:25:53.137189] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:21.703 passed 00:08:21.703 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:21.703 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:21.703 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-13 16:25:53.138325] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:21.703 passed 00:08:21.703 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-13 16:25:53.138533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:21.703 [2024-07-13 16:25:53.138593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:21.703 passed 00:08:21.703 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-13 16:25:53.139143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:21.703 [2024-07-13 16:25:53.139210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:21.703 passed 00:08:21.703 00:08:21.703 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.703 suites 1 1 n/a 0 0 00:08:21.703 tests 6 6 6 0 0 00:08:21.703 asserts 148 148 148 0 n/a 00:08:21.703 00:08:21.703 Elapsed time = 0.002 seconds 00:08:21.703 16:25:53 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:21.961 00:08:21.961 00:08:21.961 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.961 http://cunit.sourceforge.net/ 00:08:21.961 00:08:21.961 00:08:21.961 Suite: nvme_fabric 00:08:21.961 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:21.961 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:21.961 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:21.961 Test: test_nvme_fabric_discover_probe ...passed 00:08:21.961 Test: test_nvme_fabric_qpair_connect ...[2024-07-13 16:25:53.182816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:21.961 passed 00:08:21.961 00:08:21.961 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.961 suites 1 1 n/a 0 0 00:08:21.961 tests 5 5 5 0 0 00:08:21.961 asserts 60 60 60 0 n/a 00:08:21.961 00:08:21.961 Elapsed time = 0.001 seconds 00:08:21.961 16:25:53 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:21.961 00:08:21.961 00:08:21.961 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.961 http://cunit.sourceforge.net/ 00:08:21.961 00:08:21.961 00:08:21.961 Suite: nvme_opal 00:08:21.961 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:21.961 Test: test_opal_add_short_atom_header ...passed 00:08:21.961 00:08:21.961 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.961 suites 1 1 n/a 0 0 00:08:21.961 tests 2 2 2 0 0 00:08:21.961 asserts 22 22 22 0 n/a 00:08:21.961 00:08:21.961 Elapsed time = 0.000 seconds 00:08:21.961 [2024-07-13 16:25:53.222203] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:21.961 00:08:21.961 real 0m1.440s 00:08:21.961 user 0m0.704s 00:08:21.961 sys 0m0.596s 00:08:21.961 16:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.961 ************************************ 00:08:21.961 END TEST unittest_nvme 00:08:21.961 ************************************ 00:08:21.961 16:25:53 -- common/autotest_common.sh@10 -- # set +x 00:08:21.961 16:25:53 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:21.961 16:25:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:21.961 16:25:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.961 16:25:53 -- common/autotest_common.sh@10 -- # set +x 00:08:21.961 ************************************ 00:08:21.961 START TEST unittest_log 00:08:21.961 ************************************ 00:08:21.961 16:25:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:21.961 00:08:21.961 00:08:21.962 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.962 http://cunit.sourceforge.net/ 00:08:21.962 00:08:21.962 00:08:21.962 Suite: log 00:08:21.962 Test: log_test ...[2024-07-13 16:25:53.331218] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:08:21.962 [2024-07-13 16:25:53.331584] log_ut.c: 55:log_test: *DEBUG*: log test 00:08:21.962 log dump test: 00:08:21.962 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:21.962 passed 00:08:21.962 Test: deprecation ...spdk dump test: 00:08:21.962 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:21.962 spdk dump test: 00:08:21.962 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:21.962 00000010 65 20 63 68 61 72 73 e chars 00:08:22.895 passed 00:08:22.895 00:08:22.895 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.895 suites 1 1 n/a 0 0 00:08:22.895 tests 2 2 2 0 0 00:08:22.895 asserts 73 73 73 0 n/a 00:08:22.895 00:08:22.895 Elapsed time = 0.001 seconds 00:08:22.895 00:08:22.895 real 0m1.044s 00:08:22.895 user 0m0.032s 00:08:22.895 sys 0m0.013s 00:08:22.895 16:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.895 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:22.895 ************************************ 00:08:22.895 END TEST unittest_log 00:08:22.895 ************************************ 00:08:23.154 16:25:54 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:23.154 16:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:23.154 16:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.154 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.154 
************************************ 00:08:23.154 START TEST unittest_lvol 00:08:23.154 ************************************ 00:08:23.154 16:25:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:23.154 00:08:23.154 00:08:23.154 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.154 http://cunit.sourceforge.net/ 00:08:23.154 00:08:23.154 00:08:23.154 Suite: lvol 00:08:23.154 Test: lvs_init_unload_success ...[2024-07-13 16:25:54.456628] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:23.154 passed 00:08:23.154 Test: lvs_init_destroy_success ...[2024-07-13 16:25:54.457340] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:23.154 passed 00:08:23.154 Test: lvs_init_opts_success ...passed 00:08:23.154 Test: lvs_unload_lvs_is_null_fail ...[2024-07-13 16:25:54.457638] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:23.154 passed 00:08:23.154 Test: lvs_names ...[2024-07-13 16:25:54.457711] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:23.155 [2024-07-13 16:25:54.457778] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:08:23.155 [2024-07-13 16:25:54.457970] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:23.155 passed 00:08:23.155 Test: lvol_create_destroy_success ...passed 00:08:23.155 Test: lvol_create_fail ...[2024-07-13 16:25:54.458685] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:23.155 [2024-07-13 16:25:54.458840] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:23.155 passed 00:08:23.155 Test: lvol_destroy_fail ...[2024-07-13 16:25:54.459214] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:23.155 passed 00:08:23.155 Test: lvol_close ...[2024-07-13 16:25:54.459492] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:23.155 [2024-07-13 16:25:54.459558] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:23.155 passed 00:08:23.155 Test: lvol_resize ...passed 00:08:23.155 Test: lvol_set_read_only ...passed 00:08:23.155 Test: test_lvs_load ...[2024-07-13 16:25:54.460574] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:23.155 passed 00:08:23.155 Test: lvols_load ...[2024-07-13 16:25:54.460638] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:23.155 [2024-07-13 16:25:54.460882] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:23.155 [2024-07-13 16:25:54.461052] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:23.155 passed 00:08:23.155 Test: lvol_open ...passed 00:08:23.155 Test: lvol_snapshot ...passed 00:08:23.155 Test: lvol_snapshot_fail ...[2024-07-13 16:25:54.461888] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:23.155 passed 00:08:23.155 
Test: lvol_clone ...passed 00:08:23.155 Test: lvol_clone_fail ...[2024-07-13 16:25:54.462591] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:23.155 passed 00:08:23.155 Test: lvol_iter_clones ...passed 00:08:23.155 Test: lvol_refcnt ...[2024-07-13 16:25:54.463186] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol d5c9992b-f3b8-40e0-909d-98f6bbff88d1 because it is still open 00:08:23.155 passed 00:08:23.155 Test: lvol_names ...[2024-07-13 16:25:54.463467] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:23.155 [2024-07-13 16:25:54.463589] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:23.155 [2024-07-13 16:25:54.463839] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:23.155 passed 00:08:23.155 Test: lvol_create_thin_provisioned ...passed 00:08:23.155 Test: lvol_rename ...[2024-07-13 16:25:54.464373] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:23.155 [2024-07-13 16:25:54.464486] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:23.155 passed 00:08:23.155 Test: lvs_rename ...[2024-07-13 16:25:54.464741] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:23.155 passed 00:08:23.155 Test: lvol_inflate ...[2024-07-13 16:25:54.465044] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:23.155 passed 00:08:23.155 Test: lvol_decouple_parent ...[2024-07-13 16:25:54.465312] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:23.155 passed 00:08:23.155 Test: lvol_get_xattr ...passed 00:08:23.155 Test: lvol_esnap_reload ...passed 00:08:23.155 Test: lvol_esnap_create_bad_args ...[2024-07-13 16:25:54.465843] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:23.155 [2024-07-13 16:25:54.465892] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
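The lvs_verify_lvol_name failures above ("Name has no null terminator.", "lvol with name ... already exists") are plain name validation performed before any blobstore work; the esnap-clone rejection a little further down is similar arithmetic (4198400 is not an integer multiple of the 1048576-byte cluster size). A minimal sketch of the terminator check:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* A caller-supplied name is only usable if a NUL appears within the fixed-size
     * name buffer; otherwise later string operations would run off the end. */
    static bool name_has_terminator(const char *name, size_t buf_size)
    {
            return memchr(name, '\0', buf_size) != NULL;
    }
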
00:08:23.155 [2024-07-13 16:25:54.465968] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:23.155 [2024-07-13 16:25:54.466119] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:23.155 passed 00:08:23.155 Test: lvol_esnap_create_delete ...[2024-07-13 16:25:54.466281] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:23.155 passed 00:08:23.155 Test: lvol_esnap_load_esnaps ...[2024-07-13 16:25:54.466652] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:23.155 passed 00:08:23.155 Test: lvol_esnap_missing ...[2024-07-13 16:25:54.466822] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:23.155 [2024-07-13 16:25:54.466882] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:23.155 passed 00:08:23.155 Test: lvol_esnap_hotplug ... 00:08:23.155 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:23.155 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:23.155 [2024-07-13 16:25:54.467720] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d6cd746f-a887-4962-aa69-5cf6728daa84: failed to create esnap bs_dev: error -12 00:08:23.155 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:23.155 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:23.155 [2024-07-13 16:25:54.467924] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 241a1740-d450-40ea-8e50-70b3fc3e25be: failed to create esnap bs_dev: error -12 00:08:23.155 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:23.155 [2024-07-13 16:25:54.468052] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d8cd4685-1895-4353-baee-2049245fb057: failed to create esnap bs_dev: error -12 00:08:23.155 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:23.155 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:23.155 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:23.155 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:23.155 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:23.155 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:23.155 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:23.155 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:23.155 passed 00:08:23.155 Test: lvol_get_by ...passed 00:08:23.155 00:08:23.155 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.155 suites 1 1 n/a 0 0 00:08:23.155 tests 34 34 34 0 0 00:08:23.155 asserts 1439 1439 1439 0 n/a 00:08:23.155 00:08:23.155 Elapsed time = 0.013 seconds 00:08:23.155 00:08:23.155 real 0m0.069s 00:08:23.155 user 0m0.031s 00:08:23.155 sys 0m0.039s 00:08:23.155 16:25:54 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.155 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.155 ************************************ 00:08:23.155 END TEST unittest_lvol 00:08:23.155 ************************************ 00:08:23.155 16:25:54 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:23.155 16:25:54 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:23.155 16:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:23.155 16:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.155 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.155 ************************************ 00:08:23.155 START TEST unittest_nvme_rdma 00:08:23.155 ************************************ 00:08:23.155 16:25:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:23.155 00:08:23.155 00:08:23.155 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.155 http://cunit.sourceforge.net/ 00:08:23.155 00:08:23.155 00:08:23.155 Suite: nvme_rdma 00:08:23.155 Test: test_nvme_rdma_build_sgl_request ...[2024-07-13 16:25:54.596872] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:23.155 [2024-07-13 16:25:54.597306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:23.155 passed 00:08:23.155 Test: test_nvme_rdma_build_sgl_inline_request ...[2024-07-13 16:25:54.597436] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:23.155 passed 00:08:23.155 Test: test_nvme_rdma_build_contig_request ...passed 00:08:23.155 Test: test_nvme_rdma_build_contig_inline_request ...passed[2024-07-13 16:25:54.597525] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:23.155 00:08:23.155 Test: test_nvme_rdma_create_reqs ...[2024-07-13 16:25:54.597663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:23.155 passed 00:08:23.155 Test: test_nvme_rdma_create_rsps ...[2024-07-13 16:25:54.598104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:23.155 passed 00:08:23.155 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-13 16:25:54.598339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:23.155 passed 00:08:23.155 Test: test_nvme_rdma_poller_create ...[2024-07-13 16:25:54.598419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:23.155 passed 00:08:23.155 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-13 16:25:54.598637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:23.155 passed 00:08:23.155 Test: test_nvme_rdma_ctrlr_construct ...passed 00:08:23.155 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:23.155 Test: test_nvme_rdma_req_init ...passed 00:08:23.155 Test: test_nvme_rdma_validate_cm_event ...passed 00:08:23.155 Test: test_nvme_rdma_qpair_init ...[2024-07-13 16:25:54.598994] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:23.155 [2024-07-13 16:25:54.599053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:23.155 passed 00:08:23.155 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:23.155 Test: test_nvme_rdma_memory_domain ...[2024-07-13 16:25:54.599282] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:08:23.155 passed 00:08:23.155 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:23.155 Test: test_rdma_get_memory_translation ...[2024-07-13 16:25:54.599398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:23.156 [2024-07-13 16:25:54.599475] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:23.156 passed 00:08:23.156 Test: test_get_rdma_qpair_from_wc ...passed 00:08:23.156 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:23.156 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-13 16:25:54.599602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:23.156 [2024-07-13 16:25:54.599674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:23.156 passed 00:08:23.156 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-13 16:25:54.599804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:23.156 [2024-07-13 16:25:54.599879] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:23.156 [2024-07-13 16:25:54.599929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffc433b6360 on poll group 0x60b0000001a0 00:08:23.156 [2024-07-13 16:25:54.600008] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
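The nvme_rdma_build_sgl_request rejections earlier in this suite follow from the keyed SGL descriptor format: its length field is 24 bits wide, so the largest addressable block is 16777215 bytes and the logged 16777216-byte (16 MiB) request is one byte over. A sketch of the bound check:

    #include <stdbool.h>
    #include <stdint.h>

    /* Keyed SGL data block descriptors carry a 3-byte length field. */
    #define MAX_KEYED_SGL_LEN ((1u << 24) - 1)  /* 16777215 bytes */

    static bool keyed_sgl_length_ok(uint64_t length)
    {
            return length <= MAX_KEYED_SGL_LEN;
    }
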
00:08:23.156 [2024-07-13 16:25:54.600073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:23.156 [2024-07-13 16:25:54.600120] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffc433b6360 on poll group 0x60b0000001a0 00:08:23.156 [2024-07-13 16:25:54.600231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:23.156 passed 00:08:23.156 00:08:23.156 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.156 suites 1 1 n/a 0 0 00:08:23.156 tests 22 22 22 0 0 00:08:23.156 asserts 412 412 412 0 n/a 00:08:23.156 00:08:23.156 Elapsed time = 0.004 seconds 00:08:23.156 00:08:23.156 real 0m0.041s 00:08:23.156 user 0m0.019s 00:08:23.156 sys 0m0.022s 00:08:23.156 16:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.156 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.156 ************************************ 00:08:23.156 END TEST unittest_nvme_rdma 00:08:23.156 ************************************ 00:08:23.414 16:25:54 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:23.414 16:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:23.414 16:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.414 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.414 ************************************ 00:08:23.414 START TEST unittest_nvmf_transport 00:08:23.414 ************************************ 00:08:23.414 16:25:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:23.414 00:08:23.414 00:08:23.414 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.414 http://cunit.sourceforge.net/ 00:08:23.414 00:08:23.414 00:08:23.414 Suite: nvmf 00:08:23.414 Test: test_spdk_nvmf_transport_create ...[2024-07-13 16:25:54.709286] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:23.414 [2024-07-13 16:25:54.709709] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:23.414 [2024-07-13 16:25:54.709792] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:23.414 [2024-07-13 16:25:54.709957] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:23.414 passed 00:08:23.414 Test: test_nvmf_transport_poll_group_create ...passed 00:08:23.414 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-13 16:25:54.710304] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:23.414 passed 00:08:23.414 Test: test_spdk_nvmf_transport_listen_ext ...[2024-07-13 16:25:54.710419] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:23.414 [2024-07-13 16:25:54.710464] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:23.414 passed 00:08:23.414 00:08:23.414 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.414 suites 1 1 n/a 0 0 00:08:23.414 tests 4 4 4 0 0 00:08:23.414 asserts 49 49 49 0 n/a 00:08:23.414 00:08:23.414 Elapsed time = 0.001 seconds 00:08:23.414 00:08:23.414 real 0m0.055s 00:08:23.414 user 0m0.030s 00:08:23.414 sys 0m0.026s 00:08:23.414 16:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.414 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.414 ************************************ 00:08:23.414 END TEST unittest_nvmf_transport 00:08:23.414 ************************************ 00:08:23.414 16:25:54 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:23.414 16:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:23.414 16:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.414 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.414 ************************************ 00:08:23.414 START TEST unittest_rdma 00:08:23.414 ************************************ 00:08:23.414 16:25:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:23.414 00:08:23.414 00:08:23.414 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.414 http://cunit.sourceforge.net/ 00:08:23.414 00:08:23.414 00:08:23.414 Suite: rdma_common 00:08:23.414 Test: test_spdk_rdma_pd ...[2024-07-13 16:25:54.819698] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:23.414 passed 00:08:23.414 00:08:23.414 [2024-07-13 16:25:54.820136] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:23.414 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.414 suites 1 1 n/a 0 0 00:08:23.414 tests 1 1 1 0 0 00:08:23.414 asserts 31 31 31 0 n/a 00:08:23.414 00:08:23.414 Elapsed time = 0.001 seconds 00:08:23.414 00:08:23.414 real 0m0.036s 00:08:23.414 user 0m0.013s 00:08:23.414 sys 0m0.024s 00:08:23.414 16:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.414 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.414 ************************************ 00:08:23.414 END TEST unittest_rdma 00:08:23.414 ************************************ 00:08:23.673 16:25:54 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:23.673 16:25:54 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:23.673 16:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:23.673 16:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.673 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.673 ************************************ 00:08:23.673 START TEST unittest_nvme_cuse 00:08:23.673 ************************************ 00:08:23.673 16:25:54 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:23.673 00:08:23.673 00:08:23.673 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.673 http://cunit.sourceforge.net/ 00:08:23.673 00:08:23.673 00:08:23.673 Suite: nvme_cuse 00:08:23.673 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:23.673 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:23.673 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:23.673 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:23.673 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:23.673 Test: test_cuse_nvme_submit_io ...[2024-07-13 16:25:54.933353] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:23.673 passed 00:08:23.673 Test: test_cuse_nvme_reset ...[2024-07-13 16:25:54.934303] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:23.673 passed 00:08:23.673 Test: test_nvme_cuse_stop ...passed 00:08:23.673 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:23.673 00:08:23.673 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.673 suites 1 1 n/a 0 0 00:08:23.673 tests 9 9 9 0 0 00:08:23.673 asserts 121 121 121 0 n/a 00:08:23.673 00:08:23.673 Elapsed time = 0.002 seconds 00:08:23.673 00:08:23.673 real 0m0.047s 00:08:23.673 user 0m0.027s 00:08:23.673 sys 0m0.020s 00:08:23.673 16:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.673 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:23.673 ************************************ 00:08:23.673 END TEST unittest_nvme_cuse 00:08:23.673 ************************************ 00:08:23.673 16:25:55 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:08:23.673 16:25:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:23.673 16:25:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.673 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:08:23.673 ************************************ 00:08:23.673 START TEST unittest_nvmf 00:08:23.673 ************************************ 00:08:23.673 16:25:55 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:08:23.673 16:25:55 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:23.673 00:08:23.673 00:08:23.673 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.673 http://cunit.sourceforge.net/ 00:08:23.673 00:08:23.673 00:08:23.673 Suite: nvmf 00:08:23.673 Test: test_get_log_page ...passed 00:08:23.673 Test: test_process_fabrics_cmd ...[2024-07-13 16:25:55.054784] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:23.673 passed 00:08:23.673 Test: test_connect ...[2024-07-13 16:25:55.055730] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:23.674 [2024-07-13 16:25:55.055859] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:23.674 [2024-07-13 16:25:55.055925] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:23.674 [2024-07-13 16:25:55.055978] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:08:23.674 [2024-07-13 16:25:55.056079] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:23.674 [2024-07-13 16:25:55.056125] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:23.674 [2024-07-13 16:25:55.056245] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:23.674 [2024-07-13 16:25:55.056318] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:23.674 [2024-07-13 16:25:55.056435] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:23.674 [2024-07-13 16:25:55.056521] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:23.674 [2024-07-13 16:25:55.056793] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:23.674 [2024-07-13 16:25:55.056881] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:23.674 [2024-07-13 16:25:55.056988] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:23.674 [2024-07-13 16:25:55.057068] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:23.674 [2024-07-13 16:25:55.057192] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:08:23.674 [2024-07-13 16:25:55.057346] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:08:23.674 passed 00:08:23.674 Test: test_get_ns_id_desc_list ...passed 00:08:23.674 Test: test_identify_ns ...[2024-07-13 16:25:55.057604] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:23.674 [2024-07-13 16:25:55.057824] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:23.674 [2024-07-13 16:25:55.057979] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:23.674 passed 00:08:23.674 Test: test_identify_ns_iocs_specific ...[2024-07-13 16:25:55.058126] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:23.674 [2024-07-13 16:25:55.058424] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:23.674 passed 00:08:23.674 Test: test_reservation_write_exclusive ...passed 00:08:23.674 Test: test_reservation_exclusive_access ...passed 00:08:23.674 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:23.674 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:23.674 Test: test_reservation_notification_log_page ...passed 00:08:23.674 Test: test_get_dif_ctx ...passed 00:08:23.674 Test: test_set_get_features ...[2024-07-13 16:25:55.059055] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:23.674 [2024-07-13 16:25:55.059106] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:23.674 [2024-07-13 16:25:55.059160] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:23.674 [2024-07-13 16:25:55.059226] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:23.674 passed 00:08:23.674 Test: test_identify_ctrlr ...passed 00:08:23.674 Test: test_identify_ctrlr_iocs_specific ...passed 00:08:23.674 Test: test_custom_admin_cmd ...passed 00:08:23.674 Test: test_fused_compare_and_write ...[2024-07-13 16:25:55.059714] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:23.674 [2024-07-13 16:25:55.059770] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:23.674 [2024-07-13 16:25:55.059819] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:23.674 passed 00:08:23.674 Test: test_multi_async_event_reqs ...passed 00:08:23.674 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:23.674 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:23.674 Test: test_multi_async_events ...passed 00:08:23.674 Test: test_rae ...passed 00:08:23.674 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:23.674 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:23.674 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:08:23.674 Test: test_zcopy_read ...passed 00:08:23.674 Test: test_zcopy_write ...passed 00:08:23.674 Test: test_nvmf_property_set ...passed 00:08:23.674 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-13 16:25:55.060352] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:08:23.674 [2024-07-13 16:25:55.060512] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:23.674 [2024-07-13 16:25:55.060591] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:23.674 passed 00:08:23.674 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:08:23.674 00:08:23.674 [2024-07-13 16:25:55.060646] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:23.674 [2024-07-13 16:25:55.060698] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:23.674 [2024-07-13 16:25:55.060741] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:23.674 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.674 suites 1 1 n/a 0 0 00:08:23.674 tests 30 30 30 0 0 00:08:23.674 asserts 885 885 885 0 n/a 00:08:23.674 00:08:23.674 Elapsed time = 0.006 seconds 00:08:23.674 16:25:55 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:23.674 00:08:23.674 00:08:23.674 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.674 http://cunit.sourceforge.net/ 00:08:23.674 00:08:23.674 00:08:23.674 Suite: nvmf 00:08:23.674 Test: test_get_rw_params ...passed 00:08:23.674 Test: test_lba_in_range ...passed 00:08:23.674 Test: test_get_dif_ctx ...passed 00:08:23.674 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:23.674 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-13 16:25:55.095054] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:23.674 [2024-07-13 16:25:55.095307] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:23.674 passed 00:08:23.674 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-13 16:25:55.095388] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:23.674 [2024-07-13 16:25:55.095440] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:23.674 [2024-07-13 16:25:55.095515] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:23.674 passed 00:08:23.674 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-13 16:25:55.095608] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:23.674 passed 00:08:23.674 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:23.674 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:23.674 00:08:23.674 [2024-07-13 16:25:55.095646] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:23.674 [2024-07-13 16:25:55.095708] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:23.674 [2024-07-13 16:25:55.095740] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:23.674 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.674 suites 1 1 n/a 0 0 00:08:23.674 tests 9 9 9 0 0 00:08:23.674 asserts 157 157 157 0 n/a 00:08:23.674 00:08:23.674 Elapsed time = 0.001 seconds 00:08:23.674 16:25:55 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:23.674 00:08:23.674 00:08:23.674 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.674 http://cunit.sourceforge.net/ 00:08:23.674 00:08:23.674 00:08:23.674 Suite: nvmf 00:08:23.674 Test: test_discovery_log ...passed 00:08:23.674 Test: test_discovery_log_with_filters ...passed 00:08:23.674 00:08:23.674 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.674 suites 1 1 n/a 0 0 00:08:23.674 tests 2 2 2 0 0 00:08:23.674 asserts 238 238 238 0 n/a 00:08:23.674 00:08:23.674 Elapsed time = 0.003 seconds 00:08:23.934 16:25:55 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:23.934 00:08:23.934 00:08:23.934 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.934 http://cunit.sourceforge.net/ 00:08:23.934 00:08:23.934 00:08:23.934 Suite: nvmf 
00:08:23.934 Test: nvmf_test_create_subsystem ...[2024-07-13 16:25:55.187329] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:23.934 [2024-07-13 16:25:55.187902] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:23.934 [2024-07-13 16:25:55.188075] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:08:23.934 [2024-07-13 16:25:55.188160] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:23.934 [2024-07-13 16:25:55.188247] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:23.934 [2024-07-13 16:25:55.188333] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:23.934 [2024-07-13 16:25:55.188520] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:23.934 [2024-07-13 16:25:55.188803] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:23.934 [2024-07-13 16:25:55.188947] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:23.934 [2024-07-13 16:25:55.189028] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:23.934 passed 00:08:23.934 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-13 16:25:55.189097] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:23.934 passed 00:08:23.934 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:23.934 Test: test_reservation_register ...[2024-07-13 16:25:55.189424] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:23.934 [2024-07-13 16:25:55.189604] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:23.934 [2024-07-13 16:25:55.190006] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:23.934 passed 00:08:23.934 Test: test_reservation_register_with_ptpl ...[2024-07-13 16:25:55.190203] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:23.934 passed 00:08:23.934 Test: test_reservation_acquire_preempt_1 ...passed 00:08:23.934 Test: test_reservation_acquire_release_with_ptpl ...[2024-07-13 16:25:55.191603] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:23.934 passed 00:08:23.934 Test: test_reservation_release ...passed 00:08:23.934 Test: test_reservation_unregister_notification ...[2024-07-13 16:25:55.193776] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:23.934 [2024-07-13 16:25:55.194043] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:23.934 passed 00:08:23.934 Test: test_reservation_release_notification ...passed 00:08:23.934 Test: test_reservation_release_notification_write_exclusive ...[2024-07-13 16:25:55.194311] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:23.934 [2024-07-13 16:25:55.194598] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:23.934 passed 00:08:23.934 Test: test_reservation_clear_notification ...passed 00:08:23.934 Test: test_reservation_preempt_notification ...[2024-07-13 16:25:55.194887] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:23.934 [2024-07-13 16:25:55.195155] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:23.934 passed 00:08:23.934 Test: test_spdk_nvmf_ns_event ...passed 00:08:23.934 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:23.934 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:23.934 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:08:23.934 Test: test_nvmf_ns_reservation_report ...passed 00:08:23.934 Test: test_nvmf_nqn_is_valid ...[2024-07-13 16:25:55.195912] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:23.934 [2024-07-13 16:25:55.196008] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:08:23.934 [2024-07-13 16:25:55.196143] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:23.934 [2024-07-13 16:25:55.196216] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:23.934 [2024-07-13 16:25:55.196305] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ea2c4798-fa12-461e-80d3-5367c8fab8c": uuid is not the correct length 00:08:23.934 [2024-07-13 16:25:55.196347] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:23.934 passed 00:08:23.934 Test: test_nvmf_ns_reservation_restore ...passed 00:08:23.934 Test: test_nvmf_subsystem_state_change ...[2024-07-13 16:25:55.196487] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:23.934 passed 00:08:23.934 Test: test_nvmf_reservation_custom_ops ...passed 00:08:23.934 00:08:23.934 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.934 suites 1 1 n/a 0 0 00:08:23.935 tests 22 22 22 0 0 00:08:23.935 asserts 407 407 407 0 n/a 00:08:23.935 00:08:23.935 Elapsed time = 0.011 seconds 00:08:23.935 16:25:55 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:23.935 00:08:23.935 00:08:23.935 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.935 http://cunit.sourceforge.net/ 00:08:23.935 00:08:23.935 00:08:23.935 Suite: nvmf 00:08:23.935 Test: test_nvmf_tcp_create ...[2024-07-13 16:25:55.277662] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:23.935 passed 00:08:23.935 Test: test_nvmf_tcp_destroy ...passed 00:08:23.935 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:23.935 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:23.935 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:23.935 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:23.935 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:23.935 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-13 16:25:55.402786] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:23.935 [2024-07-13 16:25:55.402933] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e5b90 is same with the state(5) to be set 00:08:23.935 [2024-07-13 16:25:55.403084] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x7ffd4a6e5b90 is same with the state(5) to be set 00:08:23.935 [2024-07-13 16:25:55.403332] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:23.935 [2024-07-13 16:25:55.403423] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e5b90 is same with the state(5) to be set 00:08:23.935 passed 00:08:23.935 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:23.935 Test: test_nvmf_tcp_icreq_handle ...[2024-07-13 16:25:55.403975] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:23.935 [2024-07-13 16:25:55.404148] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:23.935 [2024-07-13 16:25:55.404378] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e5b90 is same with the state(5) to be set 00:08:23.935 [2024-07-13 16:25:55.404472] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:24.194 [2024-07-13 16:25:55.404728] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e5b90 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.404812] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.404926] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e5b90 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.405072] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.405192] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e5b90 is same with the state(5) to be set 00:08:24.194 passed 00:08:24.194 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:24.194 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-13 16:25:55.405614] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:24.194 [2024-07-13 16:25:55.405726] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.405851] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e5b90 is same with the state(5) to be set 00:08:24.194 passed 00:08:24.194 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-13 16:25:55.406117] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffd4a6e68f0 00:08:24.194 [2024-07-13 16:25:55.406328] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.406499] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.406654] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffd4a6e6050 00:08:24.194 [2024-07-13 16:25:55.406834] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.406927] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.407147] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:24.194 [2024-07-13 16:25:55.407245] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.407432] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.407589] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:24.194 [2024-07-13 16:25:55.407728] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.407864] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.408009] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.408171] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.408365] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.408451] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.408676] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.408758] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.408986] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.409070] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.409201] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.409286] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 [2024-07-13 16:25:55.409443] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:24.194 [2024-07-13 16:25:55.409522] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4a6e6050 is same with the state(5) to be set 00:08:24.194 passed 00:08:24.194 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:08:24.194 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-13 16:25:55.437431] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:24.194 [2024-07-13 16:25:55.437605] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:08:24.194 passed 00:08:24.194 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-13 16:25:55.438358] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:24.195 [2024-07-13 16:25:55.438564] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:24.195 passed 00:08:24.195 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-13 16:25:55.439089] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:24.195 [2024-07-13 16:25:55.439266] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:24.195 passed 00:08:24.195 00:08:24.195 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.195 suites 1 1 n/a 0 0 00:08:24.195 tests 17 17 17 0 0 00:08:24.195 asserts 222 222 222 0 n/a 00:08:24.195 00:08:24.195 Elapsed time = 0.184 seconds 00:08:24.195 16:25:55 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:24.195 00:08:24.195 00:08:24.195 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.195 http://cunit.sourceforge.net/ 00:08:24.195 00:08:24.195 00:08:24.195 Suite: nvmf 00:08:24.195 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:24.195 00:08:24.195 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.195 suites 1 1 n/a 0 0 00:08:24.195 tests 1 1 1 0 0 00:08:24.195 asserts 17 17 17 0 n/a 00:08:24.195 00:08:24.195 Elapsed time = 0.024 seconds 00:08:24.195 ************************************ 00:08:24.195 END TEST unittest_nvmf 00:08:24.195 ************************************ 00:08:24.195 00:08:24.195 real 0m0.627s 00:08:24.195 user 0m0.258s 00:08:24.195 sys 0m0.362s 00:08:24.195 16:25:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.195 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:08:24.454 16:25:55 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:24.454 16:25:55 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:24.454 16:25:55 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:24.454 16:25:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:24.454 16:25:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.454 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:08:24.454 
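For orientation: each *_ut binary that run_test launches above is a self-contained CUnit program, and the per-test "passed" lines plus the "Run Summary: Type Total Ran Passed Failed Inactive" tables in this log are CUnit's verbose reporter. The *ERROR* records are printed by the SPDK library code under test while negative tests deliberately drive error paths, which is why a suite can emit dozens of *ERROR* lines and still finish with 0 failures. A minimal sketch of that harness pattern follows, assuming nothing beyond stock CUnit; it is illustrative only, not SPDK source — the suite name mirrors the log, and the test body is a hypothetical stand-in.

/* Illustrative sketch, not SPDK source: the shape of a CUnit-based
 * *_ut binary like the ones run above. test_example_invalid_input is
 * a hypothetical stand-in for a real negative test such as
 * test_spdk_nvmf_transport_create. */
#include <CUnit/Basic.h>

static void
test_example_invalid_input(void)
{
	int rc = -22;	/* stand-in for a failing library call (-EINVAL) */

	/* A negative test asserts that the call failed; the library's
	 * *ERROR* log line is the expected side effect, not a failure. */
	CU_ASSERT(rc != 0);
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	suite = CU_add_suite("nvmf", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "test_example_invalid_input",
					 test_example_invalid_input) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	/* CU_BRM_VERBOSE produces the per-test lines and the
	 * "Run Summary" table captured throughout this log. */
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return num_failures;
}

Compiled as gcc example_ut.c -lcunit, a binary of this shape prints the same Suite/Test/Run Summary layout seen throughout this log.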
************************************ 00:08:24.454 START TEST unittest_nvmf_rdma 00:08:24.454 ************************************ 00:08:24.454 16:25:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:24.454 00:08:24.454 00:08:24.454 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.454 http://cunit.sourceforge.net/ 00:08:24.454 00:08:24.454 00:08:24.454 Suite: nvmf 00:08:24.454 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-13 16:25:55.772862] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:24.454 [2024-07-13 16:25:55.773403] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:24.454 [2024-07-13 16:25:55.773583] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:24.454 passed 00:08:24.454 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:24.454 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:24.454 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:24.454 Test: test_nvmf_rdma_opts_init ...passed 00:08:24.454 Test: test_nvmf_rdma_request_free_data ...passed 00:08:24.454 Test: test_nvmf_rdma_update_ibv_state ...[2024-07-13 16:25:55.775946] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:08:24.454 [2024-07-13 16:25:55.776206] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:08:24.454 passed 00:08:24.454 Test: test_nvmf_rdma_resources_create ...passed 00:08:24.454 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:24.454 Test: test_nvmf_rdma_resize_cq ...[2024-07-13 16:25:55.778340] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:08:24.454 Using CQ of insufficient size may lead to CQ overrun 00:08:24.454 [2024-07-13 16:25:55.778626] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:24.454 [2024-07-13 16:25:55.778788] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:24.454 passed 00:08:24.454 00:08:24.454 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.454 suites 1 1 n/a 0 0 00:08:24.454 tests 10 10 10 0 0 00:08:24.454 asserts 584 584 584 0 n/a 00:08:24.454 00:08:24.454 Elapsed time = 0.004 seconds 00:08:24.454 00:08:24.454 real 0m0.053s 00:08:24.454 user 0m0.019s 00:08:24.454 sys 0m0.032s 00:08:24.454 16:25:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.454 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:08:24.454 ************************************ 00:08:24.454 END TEST unittest_nvmf_rdma 00:08:24.454 ************************************ 00:08:24.454 16:25:55 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:24.454 16:25:55 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:08:24.454 16:25:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:24.454 16:25:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.454 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:08:24.454 ************************************ 00:08:24.454 START TEST unittest_scsi 00:08:24.454 ************************************ 00:08:24.454 16:25:55 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:08:24.454 16:25:55 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:24.454 00:08:24.454 00:08:24.454 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.454 http://cunit.sourceforge.net/ 00:08:24.454 00:08:24.454 00:08:24.454 Suite: dev_suite 00:08:24.454 Test: dev_destruct_null_dev ...passed 00:08:24.454 Test: dev_destruct_zero_luns ...passed 00:08:24.454 Test: dev_destruct_null_lun ...passed 00:08:24.454 Test: dev_destruct_success ...passed 00:08:24.454 Test: dev_construct_num_luns_zero ...[2024-07-13 16:25:55.899029] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:24.454 passed 00:08:24.454 Test: dev_construct_no_lun_zero ...[2024-07-13 16:25:55.900439] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:24.454 passed 00:08:24.454 Test: dev_construct_null_lun ...[2024-07-13 16:25:55.900967] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:24.455 passed 00:08:24.455 Test: dev_construct_name_too_long ...[2024-07-13 16:25:55.901286] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:24.455 passed 00:08:24.455 Test: dev_construct_success ...passed 00:08:24.455 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:24.455 Test: 
dev_queue_mgmt_task_success ...passed 00:08:24.455 Test: dev_queue_task_success ...passed 00:08:24.455 Test: dev_stop_success ...passed 00:08:24.455 Test: dev_add_port_max_ports ...[2024-07-13 16:25:55.902826] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:24.455 passed 00:08:24.455 Test: dev_add_port_construct_failure1 ...[2024-07-13 16:25:55.903343] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:24.455 passed 00:08:24.455 Test: dev_add_port_construct_failure2 ...[2024-07-13 16:25:55.903709] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:24.455 passed 00:08:24.455 Test: dev_add_port_success1 ...passed 00:08:24.455 Test: dev_add_port_success2 ...passed 00:08:24.455 Test: dev_add_port_success3 ...passed 00:08:24.455 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:24.455 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:24.455 Test: dev_find_port_by_id_success ...passed 00:08:24.455 Test: dev_add_lun_bdev_not_found ...passed 00:08:24.455 Test: dev_add_lun_no_free_lun_id ...[2024-07-13 16:25:55.905711] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:24.455 passed 00:08:24.455 Test: dev_add_lun_success1 ...passed 00:08:24.455 Test: dev_add_lun_success2 ...passed 00:08:24.455 Test: dev_check_pending_tasks ...passed 00:08:24.455 Test: dev_iterate_luns ...passed 00:08:24.455 Test: dev_find_free_lun ...passed 00:08:24.455 00:08:24.455 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.455 suites 1 1 n/a 0 0 00:08:24.455 tests 29 29 29 0 0 00:08:24.455 asserts 97 97 97 0 n/a 00:08:24.455 00:08:24.455 Elapsed time = 0.003 seconds 00:08:24.714 16:25:55 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:24.714 00:08:24.714 00:08:24.714 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.714 http://cunit.sourceforge.net/ 00:08:24.714 00:08:24.714 00:08:24.714 Suite: lun_suite 00:08:24.714 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-13 16:25:55.956168] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:24.714 passed 00:08:24.714 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-13 16:25:55.956932] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:24.714 passed 00:08:24.714 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:24.714 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:24.714 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-13 16:25:55.957488] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:24.714 passed 00:08:24.714 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:24.714 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:24.714 Test: lun_append_task_null_lun_not_supported ...passed 00:08:24.714 Test: lun_execute_scsi_task_pending ...passed 00:08:24.714 Test: lun_execute_scsi_task_complete ...passed 00:08:24.714 Test: lun_execute_scsi_task_resize ...passed 00:08:24.714 Test: lun_destruct_success ...passed 00:08:24.714 Test: lun_construct_null_ctx ...[2024-07-13 16:25:55.958649] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: 
bdev_name must be non-NULL 00:08:24.714 passed 00:08:24.714 Test: lun_construct_success ...passed 00:08:24.714 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:08:24.714 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:24.714 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:24.714 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:24.714 00:08:24.714 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.714 suites 1 1 n/a 0 0 00:08:24.714 tests 18 18 18 0 0 00:08:24.714 asserts 153 153 153 0 n/a 00:08:24.714 00:08:24.714 Elapsed time = 0.002 seconds 00:08:24.714 16:25:55 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:24.714 00:08:24.714 00:08:24.714 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.714 http://cunit.sourceforge.net/ 00:08:24.714 00:08:24.714 00:08:24.714 Suite: scsi_suite 00:08:24.714 Test: scsi_init ...passed 00:08:24.714 00:08:24.714 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.714 suites 1 1 n/a 0 0 00:08:24.714 tests 1 1 1 0 0 00:08:24.714 asserts 1 1 1 0 n/a 00:08:24.714 00:08:24.714 Elapsed time = 0.000 seconds 00:08:24.714 16:25:56 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:24.714 00:08:24.714 00:08:24.715 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.715 http://cunit.sourceforge.net/ 00:08:24.715 00:08:24.715 00:08:24.715 Suite: translation_suite 00:08:24.715 Test: mode_select_6_test ...passed 00:08:24.715 Test: mode_select_6_test2 ...passed 00:08:24.715 Test: mode_sense_6_test ...passed 00:08:24.715 Test: mode_sense_10_test ...passed 00:08:24.715 Test: inquiry_evpd_test ...passed 00:08:24.715 Test: inquiry_standard_test ...passed 00:08:24.715 Test: inquiry_overflow_test ...passed 00:08:24.715 Test: task_complete_test ...passed 00:08:24.715 Test: lba_range_test ...passed 00:08:24.715 Test: xfer_len_test ...[2024-07-13 16:25:56.030635] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:24.715 passed 00:08:24.715 Test: xfer_test ...passed 00:08:24.715 Test: scsi_name_padding_test ...passed 00:08:24.715 Test: get_dif_ctx_test ...passed 00:08:24.715 Test: unmap_split_test ...passed 00:08:24.715 00:08:24.715 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.715 suites 1 1 n/a 0 0 00:08:24.715 tests 14 14 14 0 0 00:08:24.715 asserts 1200 1200 1200 0 n/a 00:08:24.715 00:08:24.715 Elapsed time = 0.004 seconds 00:08:24.715 16:25:56 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:24.715 00:08:24.715 00:08:24.715 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.715 http://cunit.sourceforge.net/ 00:08:24.715 00:08:24.715 00:08:24.715 Suite: reservation_suite 00:08:24.715 Test: test_reservation_register ...[2024-07-13 16:25:56.074794] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:24.715 passed 00:08:24.715 Test: test_reservation_reserve ...[2024-07-13 16:25:56.075546] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:24.715 [2024-07-13 16:25:56.075756] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:24.715 [2024-07-13 
16:25:56.075987] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:24.715 passed 00:08:24.715 Test: test_reservation_preempt_non_all_regs ...[2024-07-13 16:25:56.076213] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:24.715 [2024-07-13 16:25:56.076461] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:24.715 passed 00:08:24.715 Test: test_reservation_preempt_all_regs ...[2024-07-13 16:25:56.076876] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:24.715 passed 00:08:24.715 Test: test_reservation_cmds_conflict ...[2024-07-13 16:25:56.077303] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:24.715 [2024-07-13 16:25:56.077518] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:24.715 [2024-07-13 16:25:56.077689] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:24.715 [2024-07-13 16:25:56.077832] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:24.715 [2024-07-13 16:25:56.078016] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:24.715 [2024-07-13 16:25:56.078172] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:24.715 passed 00:08:24.715 Test: test_scsi2_reserve_release ...passed 00:08:24.715 Test: test_pr_with_scsi2_reserve_release ...[2024-07-13 16:25:56.078621] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:24.715 passed 00:08:24.715 00:08:24.715 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.715 suites 1 1 n/a 0 0 00:08:24.715 tests 7 7 7 0 0 00:08:24.715 asserts 257 257 257 0 n/a 00:08:24.715 00:08:24.715 Elapsed time = 0.002 seconds 00:08:24.715 00:08:24.715 real 0m0.218s 00:08:24.715 user 0m0.084s 00:08:24.715 sys 0m0.122s 00:08:24.715 16:25:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.715 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:08:24.715 ************************************ 00:08:24.715 END TEST unittest_scsi 00:08:24.715 ************************************ 00:08:24.715 16:25:56 -- unit/unittest.sh@276 -- # uname -s 00:08:24.715 16:25:56 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:08:24.715 16:25:56 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:08:24.715 16:25:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:24.715 16:25:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.715 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:08:24.715 ************************************ 00:08:24.715 START TEST unittest_sock 00:08:24.715 ************************************ 00:08:24.715 16:25:56 -- common/autotest_common.sh@1104 -- # unittest_sock 00:08:24.715 16:25:56 -- unit/unittest.sh@123 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:24.975 00:08:24.975 00:08:24.975 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.975 http://cunit.sourceforge.net/ 00:08:24.975 00:08:24.975 00:08:24.975 Suite: sock 00:08:24.975 Test: posix_sock ...passed 00:08:24.975 Test: ut_sock ...passed 00:08:24.975 Test: posix_sock_group ...passed 00:08:24.975 Test: ut_sock_group ...passed 00:08:24.975 Test: posix_sock_group_fairness ...passed 00:08:24.975 Test: _posix_sock_close ...passed 00:08:24.975 Test: sock_get_default_opts ...passed 00:08:24.975 Test: ut_sock_impl_get_set_opts ...passed 00:08:24.975 Test: posix_sock_impl_get_set_opts ...passed 00:08:24.975 Test: ut_sock_map ...passed 00:08:24.975 Test: override_impl_opts ...passed 00:08:24.975 Test: ut_sock_group_get_ctx ...passed 00:08:24.975 00:08:24.975 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.975 suites 1 1 n/a 0 0 00:08:24.975 tests 12 12 12 0 0 00:08:24.975 asserts 349 349 349 0 n/a 00:08:24.975 00:08:24.975 Elapsed time = 0.007 seconds 00:08:24.975 16:25:56 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:24.975 00:08:24.975 00:08:24.975 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.975 http://cunit.sourceforge.net/ 00:08:24.975 00:08:24.975 00:08:24.975 Suite: posix 00:08:24.975 Test: flush ...passed 00:08:24.975 00:08:24.975 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.975 suites 1 1 n/a 0 0 00:08:24.975 tests 1 1 1 0 0 00:08:24.975 asserts 28 28 28 0 n/a 00:08:24.975 00:08:24.975 Elapsed time = 0.000 seconds 00:08:24.975 16:25:56 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:24.975 00:08:24.975 real 0m0.118s 00:08:24.975 user 0m0.046s 00:08:24.975 sys 0m0.045s 00:08:24.975 16:25:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.975 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:08:24.975 ************************************ 00:08:24.975 END TEST unittest_sock 00:08:24.975 ************************************ 00:08:24.975 16:25:56 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:24.975 16:25:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:24.975 16:25:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.975 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:08:24.975 ************************************ 00:08:24.975 START TEST unittest_thread 00:08:24.975 ************************************ 00:08:24.975 16:25:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:24.975 00:08:24.975 00:08:24.975 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.975 http://cunit.sourceforge.net/ 00:08:24.975 00:08:24.975 00:08:24.975 Suite: io_channel 00:08:24.975 Test: thread_alloc ...passed 00:08:24.975 Test: thread_send_msg ...passed 00:08:24.975 Test: thread_poller ...passed 00:08:24.975 Test: poller_pause ...passed 00:08:24.975 Test: thread_for_each ...passed 00:08:24.975 Test: for_each_channel_remove ...passed 00:08:24.975 Test: for_each_channel_unreg ...[2024-07-13 16:25:56.415174] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffd40b569a0 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:24.975 passed 00:08:24.975 Test: thread_name ...passed 
00:08:24.975 Test: channel ...[2024-07-13 16:25:56.420235] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x55a5489060e0 00:08:24.975 passed 00:08:24.975 Test: channel_destroy_races ...passed 00:08:24.975 Test: thread_exit_test ...[2024-07-13 16:25:56.426304] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:08:24.975 passed 00:08:24.975 Test: thread_update_stats_test ...passed 00:08:24.975 Test: nested_channel ...passed 00:08:24.975 Test: device_unregister_and_thread_exit_race ...passed 00:08:24.975 Test: cache_closest_timed_poller ...passed 00:08:24.975 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:24.975 Test: io_device_lookup ...passed 00:08:24.976 Test: spdk_spin ...[2024-07-13 16:25:56.439461] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:24.976 [2024-07-13 16:25:56.439583] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd40b56990 00:08:24.976 [2024-07-13 16:25:56.439807] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:24.976 [2024-07-13 16:25:56.441696] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:24.976 [2024-07-13 16:25:56.441903] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd40b56990 00:08:24.976 [2024-07-13 16:25:56.442047] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:24.976 [2024-07-13 16:25:56.442191] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd40b56990 00:08:24.976 [2024-07-13 16:25:56.442319] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:24.976 [2024-07-13 16:25:56.442480] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd40b56990 00:08:24.976 [2024-07-13 16:25:56.442614] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:24.976 [2024-07-13 16:25:56.442799] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd40b56990 00:08:24.976 passed 00:08:25.235 Test: for_each_channel_and_thread_exit_race ...passed 00:08:25.235 Test: for_each_thread_and_thread_exit_race ...passed 00:08:25.235 00:08:25.235 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.235 suites 1 1 n/a 0 0 00:08:25.235 tests 20 20 20 0 0 00:08:25.235 asserts 409 409 409 0 n/a 00:08:25.235 00:08:25.235 Elapsed time = 0.055 seconds 00:08:25.235 00:08:25.235 real 0m0.116s 00:08:25.235 user 0m0.064s 00:08:25.235 sys 0m0.048s 00:08:25.235 16:25:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.235 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.235 ************************************ 00:08:25.235 END TEST unittest_thread 00:08:25.235 
************************************ 00:08:25.235 16:25:56 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:25.235 16:25:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.235 16:25:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.235 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.235 ************************************ 00:08:25.235 START TEST unittest_iobuf 00:08:25.235 ************************************ 00:08:25.235 16:25:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:25.235 00:08:25.235 00:08:25.235 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.235 http://cunit.sourceforge.net/ 00:08:25.235 00:08:25.235 00:08:25.235 Suite: io_channel 00:08:25.235 Test: iobuf ...passed 00:08:25.235 Test: iobuf_cache ...[2024-07-13 16:25:56.582104] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:25.235 [2024-07-13 16:25:56.582477] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:25.235 [2024-07-13 16:25:56.582688] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:25.235 [2024-07-13 16:25:56.582806] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:25.235 [2024-07-13 16:25:56.582907] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:25.235 [2024-07-13 16:25:56.583093] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
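
The iobuf_cache failures just below are negative cases: the suite pins spdk_iobuf_opts.small_pool_count/large_pool_count at 4 and then asks a channel for a larger per-channel cache, so population fails by construction. A hedged sketch of the call those errors originate from (spdk_iobuf_channel_init(), per the iobuf.c line numbers in the log; exact signatures per this tree's headers):

#include "spdk/thread.h" /* the iobuf API is declared alongside the thread API in this tree */

static struct spdk_iobuf_channel g_iobuf_ch;

static int
iobuf_setup(void)
{
	int rc;

	/* A module registers a name first; each thread then opens a channel
	 * with its desired small/large buffer cache sizes. */
	rc = spdk_iobuf_register_module("example_module");
	if (rc != 0) {
		return rc;
	}

	/* Fails with the errors shown here whenever the requested cache sizes
	 * cannot be populated from the global pools (in the test: pools of 4). */
	return spdk_iobuf_channel_init(&g_iobuf_ch, "example_module",
				       32 /* small cache */, 8 /* large cache */);
}
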
00:08:25.235 passed 00:08:25.235 00:08:25.235 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.235 suites 1 1 n/a 0 0 00:08:25.235 tests 2 2 2 0 0 00:08:25.236 asserts 107 107 107 0 n/a 00:08:25.236 00:08:25.236 Elapsed time = 0.005 seconds 00:08:25.236 00:08:25.236 real 0m0.049s 00:08:25.236 user 0m0.033s 00:08:25.236 sys 0m0.015s 00:08:25.236 16:25:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.236 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.236 ************************************ 00:08:25.236 END TEST unittest_iobuf 00:08:25.236 ************************************ 00:08:25.236 16:25:56 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:08:25.236 16:25:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.236 16:25:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.236 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.236 ************************************ 00:08:25.236 START TEST unittest_util 00:08:25.236 ************************************ 00:08:25.236 16:25:56 -- common/autotest_common.sh@1104 -- # unittest_util 00:08:25.236 16:25:56 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:25.236 00:08:25.236 00:08:25.236 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.236 http://cunit.sourceforge.net/ 00:08:25.236 00:08:25.236 00:08:25.236 Suite: base64 00:08:25.236 Test: test_base64_get_encoded_strlen ...passed 00:08:25.236 Test: test_base64_get_decoded_len ...passed 00:08:25.236 Test: test_base64_encode ...passed 00:08:25.236 Test: test_base64_decode ...passed 00:08:25.236 Test: test_base64_urlsafe_encode ...passed 00:08:25.236 Test: test_base64_urlsafe_decode ...passed 00:08:25.236 00:08:25.236 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.236 suites 1 1 n/a 0 0 00:08:25.236 tests 6 6 6 0 0 00:08:25.236 asserts 112 112 112 0 n/a 00:08:25.236 00:08:25.236 Elapsed time = 0.000 seconds 00:08:25.494 16:25:56 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:25.494 00:08:25.494 00:08:25.494 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.494 http://cunit.sourceforge.net/ 00:08:25.494 00:08:25.494 00:08:25.494 Suite: bit_array 00:08:25.494 Test: test_1bit ...passed 00:08:25.494 Test: test_64bit ...passed 00:08:25.494 Test: test_find ...passed 00:08:25.494 Test: test_resize ...passed 00:08:25.494 Test: test_errors ...passed 00:08:25.494 Test: test_count ...passed 00:08:25.494 Test: test_mask_store_load ...passed 00:08:25.494 Test: test_mask_clear ...passed 00:08:25.494 00:08:25.494 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.494 suites 1 1 n/a 0 0 00:08:25.494 tests 8 8 8 0 0 00:08:25.494 asserts 5075 5075 5075 0 n/a 00:08:25.494 00:08:25.494 Elapsed time = 0.002 seconds 00:08:25.494 16:25:56 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:25.494 00:08:25.495 00:08:25.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.495 http://cunit.sourceforge.net/ 00:08:25.495 00:08:25.495 00:08:25.495 Suite: cpuset 00:08:25.495 Test: test_cpuset ...passed 00:08:25.495 Test: test_cpuset_parse ...[2024-07-13 16:25:56.781949] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:25.495 [2024-07-13 16:25:56.782443] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:08:25.495 [2024-07-13 16:25:56.782668] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:25.495 [2024-07-13 16:25:56.782881] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:25.495 [2024-07-13 16:25:56.783031] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:25.495 [2024-07-13 16:25:56.783179] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:25.495 [2024-07-13 16:25:56.783264] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:25.495 [2024-07-13 16:25:56.783420] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:25.495 passed 00:08:25.495 Test: test_cpuset_fmt ...passed 00:08:25.495 00:08:25.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.495 suites 1 1 n/a 0 0 00:08:25.495 tests 3 3 3 0 0 00:08:25.495 asserts 65 65 65 0 n/a 00:08:25.495 00:08:25.495 Elapsed time = 0.002 seconds 00:08:25.495 16:25:56 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:25.495 00:08:25.495 00:08:25.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.495 http://cunit.sourceforge.net/ 00:08:25.495 00:08:25.495 00:08:25.495 Suite: crc16 00:08:25.495 Test: test_crc16_t10dif ...passed 00:08:25.495 Test: test_crc16_t10dif_seed ...passed 00:08:25.495 Test: test_crc16_t10dif_copy ...passed 00:08:25.495 00:08:25.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.495 suites 1 1 n/a 0 0 00:08:25.495 tests 3 3 3 0 0 00:08:25.495 asserts 5 5 5 0 n/a 00:08:25.495 00:08:25.495 Elapsed time = 0.000 seconds 00:08:25.495 16:25:56 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:25.495 00:08:25.495 00:08:25.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.495 http://cunit.sourceforge.net/ 00:08:25.495 00:08:25.495 00:08:25.495 Suite: crc32_ieee 00:08:25.495 Test: test_crc32_ieee ...passed 00:08:25.495 00:08:25.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.495 suites 1 1 n/a 0 0 00:08:25.495 tests 1 1 1 0 0 00:08:25.495 asserts 1 1 1 0 n/a 00:08:25.495 00:08:25.495 Elapsed time = 0.000 seconds 00:08:25.495 16:25:56 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:25.495 00:08:25.495 00:08:25.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.495 http://cunit.sourceforge.net/ 00:08:25.495 00:08:25.495 00:08:25.495 Suite: crc32c 00:08:25.495 Test: test_crc32c ...passed 00:08:25.495 Test: test_crc32c_nvme ...passed 00:08:25.495 00:08:25.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.495 suites 1 1 n/a 0 0 00:08:25.495 tests 2 2 2 0 0 00:08:25.495 asserts 16 16 16 0 n/a 00:08:25.495 00:08:25.495 Elapsed time = 0.000 seconds 00:08:25.495 16:25:56 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:25.495 00:08:25.495 00:08:25.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.495 http://cunit.sourceforge.net/ 00:08:25.495 00:08:25.495 00:08:25.495 Suite: crc64 00:08:25.495 Test: test_crc64_nvme 
...passed 00:08:25.495 00:08:25.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.495 suites 1 1 n/a 0 0 00:08:25.495 tests 1 1 1 0 0 00:08:25.495 asserts 4 4 4 0 n/a 00:08:25.495 00:08:25.495 Elapsed time = 0.000 seconds 00:08:25.495 16:25:56 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:25.756 00:08:25.756 00:08:25.756 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.756 http://cunit.sourceforge.net/ 00:08:25.756 00:08:25.756 00:08:25.756 Suite: string 00:08:25.756 Test: test_parse_ip_addr ...passed 00:08:25.756 Test: test_str_chomp ...passed 00:08:25.756 Test: test_parse_capacity ...passed 00:08:25.756 Test: test_sprintf_append_realloc ...passed 00:08:25.756 Test: test_strtol ...passed 00:08:25.756 Test: test_strtoll ...passed 00:08:25.756 Test: test_strarray ...passed 00:08:25.756 Test: test_strcpy_replace ...passed 00:08:25.756 00:08:25.756 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.756 suites 1 1 n/a 0 0 00:08:25.756 tests 8 8 8 0 0 00:08:25.756 asserts 161 161 161 0 n/a 00:08:25.756 00:08:25.756 Elapsed time = 0.001 seconds 00:08:25.756 16:25:57 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:25.756 00:08:25.756 00:08:25.756 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.756 http://cunit.sourceforge.net/ 00:08:25.756 00:08:25.756 00:08:25.756 Suite: dif 00:08:25.756 Test: dif_generate_and_verify_test ...[2024-07-13 16:25:57.026662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:25.756 [2024-07-13 16:25:57.027362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:25.756 [2024-07-13 16:25:57.027781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:25.756 [2024-07-13 16:25:57.028180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:25.756 [2024-07-13 16:25:57.028602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:25.756 [2024-07-13 16:25:57.029008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:25.756 passed 00:08:25.756 Test: dif_disable_check_test ...[2024-07-13 16:25:57.030340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:25.756 [2024-07-13 16:25:57.030819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:25.756 [2024-07-13 16:25:57.031220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:25.756 passed 00:08:25.756 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-13 16:25:57.032700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:25.756 [2024-07-13 16:25:57.033130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:25.756 [2024-07-13 
16:25:57.033589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:25.756 [2024-07-13 16:25:57.034084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:25.756 [2024-07-13 16:25:57.034525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:25.756 [2024-07-13 16:25:57.034956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:25.756 [2024-07-13 16:25:57.035378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:25.756 [2024-07-13 16:25:57.035795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:25.756 [2024-07-13 16:25:57.036220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:25.756 [2024-07-13 16:25:57.036692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:25.756 [2024-07-13 16:25:57.037137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:25.756 passed 00:08:25.756 Test: dif_apptag_mask_test ...[2024-07-13 16:25:57.037729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:25.756 [2024-07-13 16:25:57.038127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:25.756 passed 00:08:25.756 Test: dif_sec_512_md_0_error_test ...[2024-07-13 16:25:57.038585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:25.756 passed 00:08:25.756 Test: dif_sec_4096_md_0_error_test ...[2024-07-13 16:25:57.038929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:25.757 [2024-07-13 16:25:57.039078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
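
The dif suite opens with negative ctx-init cases: the sec_512_md_0 variants trip the "Metadata size is smaller than DIF size." check above, and the sec_4100_md_128 case just below trips the block-size check. Paraphrasing those two preconditions in plain C (the 8-byte tuple is the standard T10 PI layout; exactly where and for which layouts lib/util/dif.c:479/:497 apply them is this tree's business):

#include <stdbool.h>
#include <stdint.h>

/* Standard 8-byte T10 protection information tuple (stored big-endian). */
struct t10_pi_tuple {
	uint16_t guard;   /* CRC16 over the data block */
	uint16_t app_tag; /* application tag */
	uint32_t ref_tag; /* reference tag, typically the low 32 bits of the LBA */
};

static bool
dif_ctx_args_look_sane(uint32_t block_size, uint32_t md_size)
{
	if (md_size < sizeof(struct t10_pi_tuple)) {
		/* "Metadata size is smaller than DIF size." */
		return false;
	}
	if (block_size == 0 || (block_size % 4096) != 0) {
		/* "Zero block size is not allowed and should be a multiple of 4kB"
		 * (per the error text; 4100 fails, 4096 would pass) */
		return false;
	}
	return true;
}
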
00:08:25.757 passed 00:08:25.757 Test: dif_sec_4100_md_128_error_test ...[2024-07-13 16:25:57.039246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:25.757 passed 00:08:25.757 Test: dif_guard_seed_test ...passed[2024-07-13 16:25:57.039418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:25.757 00:08:25.757 Test: dif_guard_value_test ...passed 00:08:25.757 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:25.757 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:25.757 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:25.757 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:25.757 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:25.757 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:25.757 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:25.757 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:25.757 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:25.757 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 16:25:57.086744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=7d4c, Actual=fd4c 00:08:25.757 [2024-07-13 16:25:57.089376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=7e21, Actual=fe21 00:08:25.757 [2024-07-13 16:25:57.091977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.094577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.097196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:25.757 [2024-07-13 16:25:57.099780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:25.757 [2024-07-13 16:25:57.102391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=9e86 00:08:25.757 [2024-07-13 16:25:57.104927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe21, Actual=6c9a 00:08:25.757 [2024-07-13 16:25:57.107466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=97, Expected=1ab7d3ed, Actual=1ab753ed 00:08:25.757 [2024-07-13 16:25:57.110074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3857c660, Actual=38574660 00:08:25.757 [2024-07-13 16:25:57.112699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.115273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.117874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:25.757 [2024-07-13 16:25:57.120461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:25.757 [2024-07-13 16:25:57.123043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=11666fdd 00:08:25.757 [2024-07-13 16:25:57.125593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574660, Actual=1e98ac83 00:08:25.757 [2024-07-13 16:25:57.128155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:25.757 [2024-07-13 16:25:57.130779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:08:25.757 [2024-07-13 16:25:57.133389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.135975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.138561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:25.757 [2024-07-13 16:25:57.141159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:25.757 [2024-07-13 16:25:57.143783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:25.757 [2024-07-13 16:25:57.146368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837a266, Actual=f820dcf16b1dc407 00:08:25.757 passed 00:08:25.757 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-13 16:25:57.148219] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:08:25.757 [2024-07-13 16:25:57.148654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:08:25.757 [2024-07-13 16:25:57.149062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.149485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.149926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.150335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.150757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9e86 00:08:25.757 [2024-07-13 16:25:57.151107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6c9a 00:08:25.757 [2024-07-13 16:25:57.151470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:08:25.757 [2024-07-13 16:25:57.151882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:08:25.757 [2024-07-13 16:25:57.152341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.152765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.153182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.153604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.154009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=11666fdd 00:08:25.757 [2024-07-13 16:25:57.154363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e98ac83 00:08:25.757 [2024-07-13 16:25:57.154750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:25.757 [2024-07-13 16:25:57.155148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:08:25.757 [2024-07-13 16:25:57.155562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.155979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.156402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.156814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.157247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:25.757 [2024-07-13 16:25:57.157647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=88, Expected=88010a2d4837a266, Actual=f820dcf16b1dc407 00:08:25.757 passed 00:08:25.757 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-13 16:25:57.158198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:08:25.757 [2024-07-13 16:25:57.158640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:08:25.757 [2024-07-13 16:25:57.159057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.159470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.159900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.160336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.160776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9e86 00:08:25.757 [2024-07-13 16:25:57.161147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6c9a 00:08:25.757 [2024-07-13 16:25:57.161519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:08:25.757 [2024-07-13 16:25:57.161933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:08:25.757 [2024-07-13 16:25:57.162345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.162758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 16:25:57.163169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.163570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.757 [2024-07-13 16:25:57.163979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=11666fdd 00:08:25.757 [2024-07-13 16:25:57.164355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e98ac83 00:08:25.757 [2024-07-13 16:25:57.164749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:25.757 [2024-07-13 16:25:57.165158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:08:25.757 [2024-07-13 16:25:57.165587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.757 [2024-07-13 
16:25:57.166050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.166457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.166874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.167302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:25.758 [2024-07-13 16:25:57.167660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f820dcf16b1dc407 00:08:25.758 passed 00:08:25.758 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-13 16:25:57.168211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:08:25.758 [2024-07-13 16:25:57.168664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:08:25.758 [2024-07-13 16:25:57.169089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.169505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.169943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.170350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.170766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9e86 00:08:25.758 [2024-07-13 16:25:57.171114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6c9a 00:08:25.758 [2024-07-13 16:25:57.171428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:08:25.758 [2024-07-13 16:25:57.171832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:08:25.758 [2024-07-13 16:25:57.172281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.172700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.173108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.173537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.173950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=88, Expected=1ab753ed, Actual=11666fdd 00:08:25.758 [2024-07-13 16:25:57.174307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e98ac83 00:08:25.758 [2024-07-13 16:25:57.174668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:25.758 [2024-07-13 16:25:57.175080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:08:25.758 [2024-07-13 16:25:57.175483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.175887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.176304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.176662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.177149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:25.758 [2024-07-13 16:25:57.177527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f820dcf16b1dc407 00:08:25.758 passed 00:08:25.758 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-13 16:25:57.178084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:08:25.758 [2024-07-13 16:25:57.178487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:08:25.758 [2024-07-13 16:25:57.178897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.179310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.179742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.180149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.180580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9e86 00:08:25.758 [2024-07-13 16:25:57.180944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6c9a 00:08:25.758 passed 00:08:25.758 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-13 16:25:57.181517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:08:25.758 [2024-07-13 16:25:57.181926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:08:25.758 [2024-07-13 16:25:57.182344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.182751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.183153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.183560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.183956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=11666fdd 00:08:25.758 [2024-07-13 16:25:57.184329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e98ac83 00:08:25.758 [2024-07-13 16:25:57.184755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:25.758 [2024-07-13 16:25:57.185174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:08:25.758 [2024-07-13 16:25:57.185590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.186004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.186399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.186814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.187252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:25.758 [2024-07-13 16:25:57.187613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f820dcf16b1dc407 00:08:25.758 passed 00:08:25.758 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-13 16:25:57.188160] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:08:25.758 [2024-07-13 16:25:57.188605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:08:25.758 [2024-07-13 16:25:57.189016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.189435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.189885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref 
Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.190296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.190711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9e86 00:08:25.758 [2024-07-13 16:25:57.191071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=6c9a 00:08:25.758 passed 00:08:25.758 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-13 16:25:57.191611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab7d3ed, Actual=1ab753ed 00:08:25.758 [2024-07-13 16:25:57.192016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3857c660, Actual=38574660 00:08:25.758 [2024-07-13 16:25:57.192470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.192889] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.193308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.193717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.194132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=11666fdd 00:08:25.758 [2024-07-13 16:25:57.194485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e98ac83 00:08:25.758 [2024-07-13 16:25:57.194902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:25.758 [2024-07-13 16:25:57.195333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48372266, Actual=88010a2d4837a266 00:08:25.758 [2024-07-13 16:25:57.195744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.196143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:08:25.758 [2024-07-13 16:25:57.196562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.196963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:08:25.758 [2024-07-13 16:25:57.197392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:25.758 [2024-07-13 16:25:57.197753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f820dcf16b1dc407 
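
Every "Failed to compare Guard/App Tag/Ref Tag" line in this block is _dif_verify() recomputing one field of the tuple and meeting a value the test corrupted on purpose (note the single flipped bit in pairs like Expected=1ab7d3ed, Actual=1ab753ed). For the classic 16-bit PI format the guard is a CRC16 over the data block with the T10-DIF polynomial 0x8BB7 (the different_pi_formats cases use wider guards). A self-contained bitwise sketch of that recomputation, zero seed unless a guard seed is configured, with the big-endian on-media storage glossed over:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* CRC16 T10-DIF: polynomial 0x8BB7, MSB-first, no reflection, no final XOR. */
static uint16_t
crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
{
	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
		}
	}
	return crc;
}

static bool
guard_matches(const uint8_t *block, size_t len, uint16_t stored_guard)
{
	/* A mismatch here is what prints
	 * "Failed to compare Guard: ... Expected=..., Actual=..." */
	return crc16_t10dif(0, block, len) == stored_guard;
}
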
00:08:25.758 passed 00:08:25.758 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:25.758 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:25.758 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:26.018 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:26.018 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:26.018 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:26.018 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:26.018 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:26.018 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:26.018 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 16:25:57.252599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=7d4c, Actual=fd4c 00:08:26.019 [2024-07-13 16:25:57.253887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5002, Actual=d002 00:08:26.019 [2024-07-13 16:25:57.255118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.256339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.257586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.019 [2024-07-13 16:25:57.258806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.019 [2024-07-13 16:25:57.260027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=9e86 00:08:26.019 [2024-07-13 16:25:57.261264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=c9ac 00:08:26.019 [2024-07-13 16:25:57.262500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab7d3ed, Actual=1ab753ed 00:08:26.019 [2024-07-13 16:25:57.263725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3b14b204, Actual=3b143204 00:08:26.019 [2024-07-13 16:25:57.264977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.266247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.267475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.019 [2024-07-13 16:25:57.268722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.019 [2024-07-13 16:25:57.269958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=11666fdd 00:08:26.019 [2024-07-13 16:25:57.271186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed 
to compare Guard: LBA=97, Expected=50d983f, Actual=23c272dc 00:08:26.019 [2024-07-13 16:25:57.272422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:26.019 [2024-07-13 16:25:57.273694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=29afd93d060ed1a, Actual=29afd93d0606d1a 00:08:26.019 [2024-07-13 16:25:57.274927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.276154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.277400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.019 [2024-07-13 16:25:57.278635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.019 [2024-07-13 16:25:57.279847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:26.019 [2024-07-13 16:25:57.281120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=4ce8d45ab1bab6f3 00:08:26.019 passed 00:08:26.019 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-13 16:25:57.281738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7d4c, Actual=fd4c 00:08:26.019 [2024-07-13 16:25:57.282112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=99d5, Actual=19d5 00:08:26.019 [2024-07-13 16:25:57.282498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.282878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.283276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.019 [2024-07-13 16:25:57.283691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.019 [2024-07-13 16:25:57.284060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=9e86 00:08:26.019 [2024-07-13 16:25:57.284462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=7b 00:08:26.019 [2024-07-13 16:25:57.284845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab7d3ed, Actual=1ab753ed 00:08:26.019 [2024-07-13 16:25:57.285234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a5a47a56, Actual=a5a4fa56 00:08:26.019 [2024-07-13 16:25:57.285641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 
00:08:26.019 [2024-07-13 16:25:57.286022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.286408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.019 [2024-07-13 16:25:57.286790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.019 [2024-07-13 16:25:57.287167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=11666fdd 00:08:26.019 [2024-07-13 16:25:57.287546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=bd72ba8e 00:08:26.019 [2024-07-13 16:25:57.287955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:26.019 [2024-07-13 16:25:57.288353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=767b1f7e0bb408e7, Actual=767b1f7e0bb488e7 00:08:26.019 [2024-07-13 16:25:57.288753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.289139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.289537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.019 [2024-07-13 16:25:57.289922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.019 [2024-07-13 16:25:57.290328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:26.019 [2024-07-13 16:25:57.290711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=380936b76a6e530e 00:08:26.019 passed 00:08:26.019 Test: dix_sec_512_md_0_error ...[2024-07-13 16:25:57.291054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
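
The dix_* cases around here are the same inject-and-verify drill for the separate-metadata (DIX) layout, where the 8-byte tuples live in their own buffer instead of trailing each data block. The iovec split below sketches that layout; the generate/verify entry points and their arity are per this tree's include/spdk/dif.h and are left in comments since the ctx setup is not shown:

#include <stdint.h>
#include <sys/uio.h>

#define NUM_BLOCKS 8
#define BLOCK_SIZE 512 /* data only; no interleaved metadata */
#define PI_SIZE    8   /* one T10 tuple per block */

static uint8_t g_data[NUM_BLOCKS * BLOCK_SIZE];
static uint8_t g_md[NUM_BLOCKS * PI_SIZE];

/* DIX hands the library two regions: the data blocks and a parallel
 * metadata array holding one protection tuple per block. */
static struct iovec g_data_iov = { .iov_base = g_data, .iov_len = sizeof(g_data) };
static struct iovec g_md_iov   = { .iov_base = g_md,   .iov_len = sizeof(g_md) };

/* With a configured ctx, generation and verification walk both regions in step:
 *   spdk_dix_generate(&g_data_iov, 1, &g_md_iov, NUM_BLOCKS, &ctx);
 *   spdk_dix_verify(&g_data_iov, 1, &g_md_iov, NUM_BLOCKS, &ctx, &err_blk);
 */
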
00:08:26.019 passed 00:08:26.019 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:08:26.019 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:26.019 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:26.019 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:26.019 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:26.019 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:26.019 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:26.019 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:26.019 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:26.019 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 16:25:57.336725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=7d4c, Actual=fd4c 00:08:26.019 [2024-07-13 16:25:57.338025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5002, Actual=d002 00:08:26.019 [2024-07-13 16:25:57.339259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.340489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.341754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.019 [2024-07-13 16:25:57.342980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.019 [2024-07-13 16:25:57.344186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=9e86 00:08:26.019 [2024-07-13 16:25:57.345442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=c9ac 00:08:26.019 [2024-07-13 16:25:57.346646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab7d3ed, Actual=1ab753ed 00:08:26.019 [2024-07-13 16:25:57.347855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3b14b204, Actual=3b143204 00:08:26.019 [2024-07-13 16:25:57.349107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.019 [2024-07-13 16:25:57.350357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.020 [2024-07-13 16:25:57.351583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.020 [2024-07-13 16:25:57.352818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.020 [2024-07-13 16:25:57.354053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=11666fdd 00:08:26.020 [2024-07-13 16:25:57.355279] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, 
Actual=23c272dc 00:08:26.020 [2024-07-13 16:25:57.356539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:26.020 [2024-07-13 16:25:57.357772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=29afd93d060ed1a, Actual=29afd93d0606d1a 00:08:26.020 [2024-07-13 16:25:57.358992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.020 [2024-07-13 16:25:57.360207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=8088 00:08:26.020 [2024-07-13 16:25:57.361447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.020 [2024-07-13 16:25:57.362666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8061 00:08:26.020 [2024-07-13 16:25:57.363912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:26.020 [2024-07-13 16:25:57.365146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=4ce8d45ab1bab6f3 00:08:26.020 passed 00:08:26.020 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-13 16:25:57.365838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7d4c, Actual=fd4c 00:08:26.020 [2024-07-13 16:25:57.366209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=99d5, Actual=19d5 00:08:26.020 [2024-07-13 16:25:57.366599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.020 [2024-07-13 16:25:57.366984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.020 [2024-07-13 16:25:57.367393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.020 [2024-07-13 16:25:57.367785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.020 [2024-07-13 16:25:57.368159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=9e86 00:08:26.020 [2024-07-13 16:25:57.368562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=7b 00:08:26.020 [2024-07-13 16:25:57.368948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab7d3ed, Actual=1ab753ed 00:08:26.020 [2024-07-13 16:25:57.369341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a5a47a56, Actual=a5a4fa56 00:08:26.020 [2024-07-13 16:25:57.369760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.020 [2024-07-13 16:25:57.370138] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.020 [2024-07-13 16:25:57.370518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.020 [2024-07-13 16:25:57.370910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.020 [2024-07-13 16:25:57.371283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=11666fdd 00:08:26.020 [2024-07-13 16:25:57.371667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=bd72ba8e 00:08:26.020 [2024-07-13 16:25:57.372051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3 00:08:26.020 [2024-07-13 16:25:57.372459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=767b1f7e0bb408e7, Actual=767b1f7e0bb488e7 00:08:26.020 [2024-07-13 16:25:57.372833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.020 [2024-07-13 16:25:57.373215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=8088 00:08:26.020 [2024-07-13 16:25:57.373601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.020 [2024-07-13 16:25:57.373986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8059 00:08:26.020 [2024-07-13 16:25:57.374372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=3e2c53799b38b0eb 00:08:26.020 [2024-07-13 16:25:57.374760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=380936b76a6e530e 00:08:26.020 passed 00:08:26.020 Test: set_md_interleave_iovs_test ...passed 00:08:26.020 Test: set_md_interleave_iovs_split_test ...passed 00:08:26.020 Test: dif_generate_stream_pi_16_test ...passed 00:08:26.020 Test: dif_generate_stream_test ...passed 00:08:26.020 Test: set_md_interleave_iovs_alignment_test ...[2024-07-13 16:25:57.383608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:08:26.020 passed 00:08:26.020 Test: dif_generate_split_test ...passed 00:08:26.020 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:26.020 Test: dif_verify_split_test ...passed 00:08:26.020 Test: dif_verify_stream_multi_segments_test ...passed 00:08:26.020 Test: update_crc32c_pi_16_test ...passed 00:08:26.020 Test: update_crc32c_test ...passed 00:08:26.020 Test: dif_update_crc32c_split_test ...passed 00:08:26.020 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:26.020 Test: get_range_with_md_test ...passed 00:08:26.020 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:26.020 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:26.020 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:26.020 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:26.020 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:26.020 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:26.020 Test: dif_generate_and_verify_unmap_test ...passed 00:08:26.020 00:08:26.020 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.020 suites 1 1 n/a 0 0 00:08:26.020 tests 79 79 79 0 0 00:08:26.020 asserts 3584 3584 3584 0 n/a 00:08:26.020 00:08:26.020 Elapsed time = 0.355 seconds 00:08:26.020 16:25:57 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:26.020 00:08:26.020 00:08:26.020 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.020 http://cunit.sourceforge.net/ 00:08:26.020 00:08:26.020 00:08:26.020 Suite: iov 00:08:26.020 Test: test_single_iov ...passed 00:08:26.020 Test: test_simple_iov ...passed 00:08:26.020 Test: test_complex_iov ...passed 00:08:26.020 Test: test_iovs_to_buf ...passed 00:08:26.020 Test: test_buf_to_iovs ...passed 00:08:26.020 Test: test_memset ...passed 00:08:26.020 Test: test_iov_one ...passed 00:08:26.020 Test: test_iov_xfer ...passed 00:08:26.020 00:08:26.020 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.020 suites 1 1 n/a 0 0 00:08:26.020 tests 8 8 8 0 0 00:08:26.020 asserts 156 156 156 0 n/a 00:08:26.020 00:08:26.020 Elapsed time = 0.000 seconds 00:08:26.020 16:25:57 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:26.279 00:08:26.279 00:08:26.279 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.279 http://cunit.sourceforge.net/ 00:08:26.279 00:08:26.279 00:08:26.279 Suite: math 00:08:26.279 Test: test_serial_number_arithmetic ...passed 00:08:26.279 Suite: erase 00:08:26.279 Test: test_memset_s ...passed 00:08:26.279 00:08:26.279 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.279 suites 2 2 n/a 0 0 00:08:26.279 tests 2 2 2 0 0 00:08:26.279 asserts 18 18 18 0 n/a 00:08:26.279 00:08:26.279 Elapsed time = 0.000 seconds 00:08:26.279 16:25:57 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:26.279 00:08:26.279 00:08:26.279 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.279 http://cunit.sourceforge.net/ 00:08:26.279 00:08:26.279 00:08:26.279 Suite: pipe 00:08:26.279 Test: test_create_destroy ...passed 00:08:26.279 Test: test_write_get_buffer ...passed 00:08:26.279 Test: test_write_advance ...passed 00:08:26.279 Test: test_read_get_buffer ...passed 00:08:26.279 Test: test_read_advance ...passed 00:08:26.279 Test: test_data ...passed 00:08:26.279 00:08:26.279 Run Summary: Type Total Ran 
Passed Failed Inactive 00:08:26.279 suites 1 1 n/a 0 0 00:08:26.279 tests 6 6 6 0 0 00:08:26.279 asserts 250 250 250 0 n/a 00:08:26.279 00:08:26.279 Elapsed time = 0.000 seconds 00:08:26.279 16:25:57 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:26.279 00:08:26.279 00:08:26.279 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.279 http://cunit.sourceforge.net/ 00:08:26.279 00:08:26.279 00:08:26.279 Suite: xor 00:08:26.279 Test: test_xor_gen ...passed 00:08:26.279 00:08:26.279 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.279 suites 1 1 n/a 0 0 00:08:26.279 tests 1 1 1 0 0 00:08:26.279 asserts 17 17 17 0 n/a 00:08:26.279 00:08:26.279 Elapsed time = 0.007 seconds 00:08:26.279 00:08:26.279 real 0m0.924s 00:08:26.279 user 0m0.603s 00:08:26.279 sys 0m0.277s 00:08:26.279 16:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.279 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.279 ************************************ 00:08:26.279 END TEST unittest_util 00:08:26.279 ************************************ 00:08:26.279 16:25:57 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:26.279 16:25:57 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:26.279 16:25:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:26.279 16:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.280 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.280 ************************************ 00:08:26.280 START TEST unittest_vhost 00:08:26.280 ************************************ 00:08:26.280 16:25:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:26.280 00:08:26.280 00:08:26.280 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.280 http://cunit.sourceforge.net/ 00:08:26.280 00:08:26.280 00:08:26.280 Suite: vhost_suite 00:08:26.280 Test: desc_to_iov_test ...[2024-07-13 16:25:57.709339] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:26.280 passed 00:08:26.280 Test: create_controller_test ...[2024-07-13 16:25:57.715197] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:26.280 [2024-07-13 16:25:57.715595] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:26.280 [2024-07-13 16:25:57.715980] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:26.280 [2024-07-13 16:25:57.716373] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:26.280 [2024-07-13 16:25:57.716708] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:26.280 [2024-07-13 16:25:57.717089] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-07-13 16:25:57.718708] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:26.280 passed 00:08:26.280 Test: session_find_by_vid_test ...passed 00:08:26.280 Test: remove_controller_test ...[2024-07-13 16:25:57.721731] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:26.280 passed 00:08:26.280 Test: vq_avail_ring_get_test ...passed 00:08:26.280 Test: vq_packed_ring_test ...passed 00:08:26.280 Test: vhost_blk_construct_test ...passed 00:08:26.280 00:08:26.280 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.280 suites 1 1 n/a 0 0 00:08:26.280 tests 7 7 7 0 0 00:08:26.280 asserts 145 145 145 0 n/a 00:08:26.280 00:08:26.280 Elapsed time = 0.014 seconds 00:08:26.539 00:08:26.539 real 0m0.066s 00:08:26.539 user 0m0.025s 00:08:26.539 sys 0m0.037s 00:08:26.539 16:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.539 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.539 ************************************ 00:08:26.539 END TEST unittest_vhost 00:08:26.539 ************************************ 00:08:26.539 16:25:57 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:26.539 16:25:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:26.539 16:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.539 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.539 ************************************ 00:08:26.539 START TEST unittest_dma 00:08:26.539 ************************************ 00:08:26.539 16:25:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:26.539 00:08:26.539 00:08:26.539 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.539 http://cunit.sourceforge.net/ 00:08:26.539 00:08:26.539 00:08:26.539 Suite: dma_suite 00:08:26.539 Test: test_dma ...[2024-07-13 16:25:57.841354] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:26.539 passed 00:08:26.539 00:08:26.539 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.539 suites 1 1 n/a 0 0 00:08:26.539 tests 1 1 1 0 0 00:08:26.539 asserts 50 50 50 0 n/a 00:08:26.539 00:08:26.539 Elapsed time = 0.001 seconds 00:08:26.539 00:08:26.539 real 0m0.040s 00:08:26.539 user 0m0.008s 00:08:26.539 sys 0m0.032s 00:08:26.539 16:25:57 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.539 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.539 ************************************ 00:08:26.539 END TEST unittest_dma 00:08:26.539 ************************************ 00:08:26.539 16:25:57 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:08:26.539 16:25:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:26.539 16:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.539 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.539 ************************************ 00:08:26.539 START TEST unittest_init 00:08:26.539 ************************************ 00:08:26.539 16:25:57 -- common/autotest_common.sh@1104 -- # unittest_init 00:08:26.539 16:25:57 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:26.539 00:08:26.539 00:08:26.539 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.539 http://cunit.sourceforge.net/ 00:08:26.539 00:08:26.539 00:08:26.539 Suite: subsystem_suite 00:08:26.539 Test: subsystem_sort_test_depends_on_single ...passed 00:08:26.539 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:26.539 Test: subsystem_sort_test_missing_dependency ...[2024-07-13 16:25:57.961602] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:26.539 [2024-07-13 16:25:57.962309] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:26.539 passed 00:08:26.539 00:08:26.539 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.539 suites 1 1 n/a 0 0 00:08:26.539 tests 3 3 3 0 0 00:08:26.539 asserts 20 20 20 0 n/a 00:08:26.539 00:08:26.539 Elapsed time = 0.001 seconds 00:08:26.539 00:08:26.539 real 0m0.052s 00:08:26.539 user 0m0.021s 00:08:26.539 sys 0m0.029s 00:08:26.539 16:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.539 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.539 ************************************ 00:08:26.539 END TEST unittest_init 00:08:26.539 ************************************ 00:08:26.811 16:25:58 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:08:26.811 16:25:58 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:26.811 16:25:58 -- unit/unittest.sh@290 -- # hostname 00:08:26.811 16:25:58 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:27.092 geninfo: WARNING: invalid characters removed from testname! 
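The subsystem_suite results above cover SPDK's init-time dependency ordering: each subsystem declares what it depends on, and spdk_subsystem_init must refuse to start when a declared dependency was never registered, producing the "dependency B is missing" / "subsystem C is missing" errors logged by the test. A rough, self-contained sketch of that missing-dependency check (names and structures are illustrative, not SPDK's):

#include <stdio.h>
#include <string.h>

struct subsystem {
    const char *name;
    const char *depends_on;   /* one dependency keeps the sketch short */
};

/* Scan the registered set and fail, log-style, on an unresolved dependency;
 * a real init would follow this with a topological sort to order startup. */
static int check_dependencies(const struct subsystem *subs, int n)
{
    for (int i = 0; i < n; i++) {
        if (subs[i].depends_on == NULL)
            continue;
        int found = 0;
        for (int j = 0; j < n; j++)
            if (strcmp(subs[j].name, subs[i].depends_on) == 0)
                found = 1;
        if (!found) {
            fprintf(stderr, "subsystem %s dependency %s is missing\n",
                    subs[i].name, subs[i].depends_on);
            return -1;
        }
    }
    return 0;
}

int main(void)
{
    /* Mirrors subsystem_sort_test_missing_dependency: A needs B, B absent. */
    struct subsystem subs[] = { { "A", "B" } };
    return check_dependencies(subs, 1) ? 1 : 0;
}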
00:08:53.675 16:26:22 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:55.053 16:26:26 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:57.633 16:26:28 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:00.167 16:26:31 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:02.697 16:26:33 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:04.595 16:26:35 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:07.252 16:26:38 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:09.224 16:26:40 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:09.224 16:26:40 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:09.792 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:09.792 Found 309 entries. 
00:09:09.792 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:09:09.792 Writing .css and .png files. 00:09:09.792 Generating output. 00:09:09.792 Processing file include/linux/virtio_ring.h 00:09:10.050 Processing file include/spdk/mmio.h 00:09:10.050 Processing file include/spdk/thread.h 00:09:10.050 Processing file include/spdk/nvmf_transport.h 00:09:10.050 Processing file include/spdk/histogram_data.h 00:09:10.050 Processing file include/spdk/endian.h 00:09:10.050 Processing file include/spdk/base64.h 00:09:10.050 Processing file include/spdk/trace.h 00:09:10.050 Processing file include/spdk/nvme.h 00:09:10.050 Processing file include/spdk/bdev_module.h 00:09:10.050 Processing file include/spdk/util.h 00:09:10.050 Processing file include/spdk/nvme_spec.h 00:09:10.308 Processing file include/spdk_internal/virtio.h 00:09:10.308 Processing file include/spdk_internal/sock.h 00:09:10.308 Processing file include/spdk_internal/utf.h 00:09:10.308 Processing file include/spdk_internal/rdma.h 00:09:10.308 Processing file include/spdk_internal/nvme_tcp.h 00:09:10.308 Processing file include/spdk_internal/sgl.h 00:09:10.308 Processing file lib/accel/accel_rpc.c 00:09:10.308 Processing file lib/accel/accel.c 00:09:10.308 Processing file lib/accel/accel_sw.c 00:09:10.566 Processing file lib/bdev/scsi_nvme.c 00:09:10.566 Processing file lib/bdev/bdev_zone.c 00:09:10.566 Processing file lib/bdev/bdev.c 00:09:10.566 Processing file lib/bdev/part.c 00:09:10.566 Processing file lib/bdev/bdev_rpc.c 00:09:10.823 Processing file lib/blob/blobstore.c 00:09:10.824 Processing file lib/blob/blobstore.h 00:09:10.824 Processing file lib/blob/blob_bs_dev.c 00:09:10.824 Processing file lib/blob/zeroes.c 00:09:10.824 Processing file lib/blob/request.c 00:09:11.082 Processing file lib/blobfs/blobfs.c 00:09:11.082 Processing file lib/blobfs/tree.c 00:09:11.082 Processing file lib/conf/conf.c 00:09:11.082 Processing file lib/dma/dma.c 00:09:11.339 Processing file lib/env_dpdk/threads.c 00:09:11.339 Processing file lib/env_dpdk/pci_ioat.c 00:09:11.339 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:11.339 Processing file lib/env_dpdk/sigbus_handler.c 00:09:11.339 Processing file lib/env_dpdk/init.c 00:09:11.339 Processing file lib/env_dpdk/pci_event.c 00:09:11.339 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:11.339 Processing file lib/env_dpdk/pci_dpdk.c 00:09:11.339 Processing file lib/env_dpdk/pci_virtio.c 00:09:11.339 Processing file lib/env_dpdk/memory.c 00:09:11.339 Processing file lib/env_dpdk/pci_idxd.c 00:09:11.339 Processing file lib/env_dpdk/pci.c 00:09:11.339 Processing file lib/env_dpdk/env.c 00:09:11.339 Processing file lib/env_dpdk/pci_vmd.c 00:09:11.596 Processing file lib/event/scheduler_static.c 00:09:11.596 Processing file lib/event/app_rpc.c 00:09:11.596 Processing file lib/event/log_rpc.c 00:09:11.596 Processing file lib/event/app.c 00:09:11.596 Processing file lib/event/reactor.c 00:09:11.853 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:11.853 Processing file lib/ftl/ftl_core.h 00:09:11.853 Processing file lib/ftl/ftl_io.h 00:09:11.853 Processing file lib/ftl/ftl_init.c 00:09:11.853 Processing file lib/ftl/ftl_reloc.c 00:09:11.853 Processing file lib/ftl/ftl_sb.c 00:09:11.853 Processing file lib/ftl/ftl_band.c 00:09:11.853 Processing file lib/ftl/ftl_band.h 00:09:11.854 Processing file lib/ftl/ftl_writer.c 00:09:11.854 Processing file lib/ftl/ftl_layout.c 00:09:11.854 Processing file lib/ftl/ftl_l2p.c 00:09:11.854 Processing file lib/ftl/ftl_nv_cache.c 00:09:11.854 
Processing file lib/ftl/ftl_debug.h 00:09:11.854 Processing file lib/ftl/ftl_nv_cache.h 00:09:11.854 Processing file lib/ftl/ftl_p2l.c 00:09:11.854 Processing file lib/ftl/ftl_debug.c 00:09:11.854 Processing file lib/ftl/ftl_writer.h 00:09:11.854 Processing file lib/ftl/ftl_l2p_flat.c 00:09:11.854 Processing file lib/ftl/ftl_io.c 00:09:11.854 Processing file lib/ftl/ftl_band_ops.c 00:09:11.854 Processing file lib/ftl/ftl_trace.c 00:09:11.854 Processing file lib/ftl/ftl_rq.c 00:09:11.854 Processing file lib/ftl/ftl_core.c 00:09:11.854 Processing file lib/ftl/ftl_l2p_cache.c 00:09:12.111 Processing file lib/ftl/base/ftl_base_dev.c 00:09:12.111 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:12.369 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:12.369 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:12.369 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:12.628 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:12.628 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:12.628 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:12.628 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:12.628 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:12.628 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:12.628 Processing file lib/ftl/utils/ftl_property.h 00:09:12.628 Processing file lib/ftl/utils/ftl_conf.c 00:09:12.628 Processing file lib/ftl/utils/ftl_df.h 00:09:12.628 Processing file lib/ftl/utils/ftl_md.c 00:09:12.628 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:12.628 Processing file lib/ftl/utils/ftl_property.c 00:09:12.628 Processing file lib/ftl/utils/ftl_mempool.c 00:09:12.887 Processing file lib/idxd/idxd_internal.h 00:09:12.887 Processing file lib/idxd/idxd_user.c 00:09:12.887 Processing file lib/idxd/idxd.c 00:09:13.147 Processing file lib/init/subsystem.c 00:09:13.147 Processing file lib/init/subsystem_rpc.c 00:09:13.147 Processing file lib/init/json_config.c 00:09:13.147 Processing file lib/init/rpc.c 00:09:13.147 Processing file lib/ioat/ioat_internal.h 00:09:13.147 Processing file lib/ioat/ioat.c 00:09:13.406 Processing file lib/iscsi/task.h 00:09:13.406 Processing file lib/iscsi/iscsi.h 00:09:13.406 Processing file lib/iscsi/tgt_node.c 00:09:13.406 Processing file lib/iscsi/conn.c 00:09:13.406 Processing file lib/iscsi/param.c 00:09:13.406 Processing file lib/iscsi/portal_grp.c 00:09:13.406 Processing file lib/iscsi/init_grp.c 00:09:13.406 Processing file lib/iscsi/md5.c 00:09:13.406 Processing file lib/iscsi/iscsi.c 00:09:13.406 Processing file lib/iscsi/iscsi_subsystem.c 00:09:13.406 Processing file lib/iscsi/task.c 00:09:13.406 Processing file lib/iscsi/iscsi_rpc.c 00:09:13.665 Processing file lib/json/json_parse.c 00:09:13.665 Processing file lib/json/json_write.c 00:09:13.665 Processing file lib/json/json_util.c 00:09:13.665 Processing file 
lib/jsonrpc/jsonrpc_server_tcp.c 00:09:13.665 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:13.665 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:13.665 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:13.923 Processing file lib/log/log_flags.c 00:09:13.923 Processing file lib/log/log_deprecated.c 00:09:13.923 Processing file lib/log/log.c 00:09:13.923 Processing file lib/lvol/lvol.c 00:09:13.923 Processing file lib/nbd/nbd.c 00:09:13.923 Processing file lib/nbd/nbd_rpc.c 00:09:14.182 Processing file lib/notify/notify.c 00:09:14.182 Processing file lib/notify/notify_rpc.c 00:09:14.750 Processing file lib/nvme/nvme_discovery.c 00:09:14.750 Processing file lib/nvme/nvme_qpair.c 00:09:14.750 Processing file lib/nvme/nvme_ns.c 00:09:14.750 Processing file lib/nvme/nvme_quirks.c 00:09:14.750 Processing file lib/nvme/nvme_transport.c 00:09:14.750 Processing file lib/nvme/nvme_pcie_common.c 00:09:14.750 Processing file lib/nvme/nvme_poll_group.c 00:09:14.750 Processing file lib/nvme/nvme_opal.c 00:09:14.750 Processing file lib/nvme/nvme_pcie_internal.h 00:09:14.750 Processing file lib/nvme/nvme_ns_cmd.c 00:09:14.750 Processing file lib/nvme/nvme_io_msg.c 00:09:14.750 Processing file lib/nvme/nvme_pcie.c 00:09:14.750 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:14.750 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:14.750 Processing file lib/nvme/nvme_tcp.c 00:09:14.750 Processing file lib/nvme/nvme_rdma.c 00:09:14.750 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:14.750 Processing file lib/nvme/nvme_zns.c 00:09:14.750 Processing file lib/nvme/nvme.c 00:09:14.750 Processing file lib/nvme/nvme_cuse.c 00:09:14.750 Processing file lib/nvme/nvme_fabric.c 00:09:14.750 Processing file lib/nvme/nvme_vfio_user.c 00:09:14.750 Processing file lib/nvme/nvme_internal.h 00:09:14.750 Processing file lib/nvme/nvme_ctrlr.c 00:09:15.317 Processing file lib/nvmf/transport.c 00:09:15.317 Processing file lib/nvmf/rdma.c 00:09:15.317 Processing file lib/nvmf/ctrlr_bdev.c 00:09:15.317 Processing file lib/nvmf/nvmf_rpc.c 00:09:15.317 Processing file lib/nvmf/ctrlr.c 00:09:15.317 Processing file lib/nvmf/tcp.c 00:09:15.317 Processing file lib/nvmf/nvmf_internal.h 00:09:15.317 Processing file lib/nvmf/subsystem.c 00:09:15.317 Processing file lib/nvmf/ctrlr_discovery.c 00:09:15.317 Processing file lib/nvmf/nvmf.c 00:09:15.317 Processing file lib/rdma/common.c 00:09:15.317 Processing file lib/rdma/rdma_verbs.c 00:09:15.317 Processing file lib/rpc/rpc.c 00:09:15.575 Processing file lib/scsi/task.c 00:09:15.575 Processing file lib/scsi/scsi_rpc.c 00:09:15.575 Processing file lib/scsi/scsi.c 00:09:15.575 Processing file lib/scsi/scsi_pr.c 00:09:15.575 Processing file lib/scsi/dev.c 00:09:15.575 Processing file lib/scsi/lun.c 00:09:15.575 Processing file lib/scsi/scsi_bdev.c 00:09:15.575 Processing file lib/scsi/port.c 00:09:15.575 Processing file lib/sock/sock_rpc.c 00:09:15.575 Processing file lib/sock/sock.c 00:09:15.834 Processing file lib/thread/thread.c 00:09:15.834 Processing file lib/thread/iobuf.c 00:09:15.834 Processing file lib/trace/trace.c 00:09:15.834 Processing file lib/trace/trace_rpc.c 00:09:15.834 Processing file lib/trace/trace_flags.c 00:09:15.834 Processing file lib/trace_parser/trace.cpp 00:09:16.093 Processing file lib/ut/ut.c 00:09:16.093 Processing file lib/ut_mock/mock.c 00:09:16.351 Processing file lib/util/fd.c 00:09:16.351 Processing file lib/util/math.c 00:09:16.351 Processing file lib/util/hexlify.c 00:09:16.351 Processing file lib/util/uuid.c 00:09:16.351 Processing 
file lib/util/crc64.c 00:09:16.351 Processing file lib/util/crc16.c 00:09:16.351 Processing file lib/util/strerror_tls.c 00:09:16.351 Processing file lib/util/crc32c.c 00:09:16.351 Processing file lib/util/cpuset.c 00:09:16.351 Processing file lib/util/pipe.c 00:09:16.351 Processing file lib/util/bit_array.c 00:09:16.351 Processing file lib/util/file.c 00:09:16.351 Processing file lib/util/fd_group.c 00:09:16.351 Processing file lib/util/base64.c 00:09:16.351 Processing file lib/util/iov.c 00:09:16.351 Processing file lib/util/xor.c 00:09:16.351 Processing file lib/util/crc32_ieee.c 00:09:16.351 Processing file lib/util/string.c 00:09:16.351 Processing file lib/util/zipf.c 00:09:16.351 Processing file lib/util/dif.c 00:09:16.351 Processing file lib/util/crc32.c 00:09:16.610 Processing file lib/vfio_user/host/vfio_user.c 00:09:16.610 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:16.610 Processing file lib/vhost/vhost_internal.h 00:09:16.610 Processing file lib/vhost/rte_vhost_user.c 00:09:16.610 Processing file lib/vhost/vhost_blk.c 00:09:16.610 Processing file lib/vhost/vhost_rpc.c 00:09:16.610 Processing file lib/vhost/vhost.c 00:09:16.610 Processing file lib/vhost/vhost_scsi.c 00:09:16.868 Processing file lib/virtio/virtio_vfio_user.c 00:09:16.868 Processing file lib/virtio/virtio.c 00:09:16.868 Processing file lib/virtio/virtio_vhost_user.c 00:09:16.868 Processing file lib/virtio/virtio_pci.c 00:09:16.868 Processing file lib/vmd/led.c 00:09:16.868 Processing file lib/vmd/vmd.c 00:09:16.868 Processing file module/accel/dsa/accel_dsa.c 00:09:16.868 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:17.126 Processing file module/accel/error/accel_error_rpc.c 00:09:17.126 Processing file module/accel/error/accel_error.c 00:09:17.126 Processing file module/accel/iaa/accel_iaa.c 00:09:17.126 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:17.126 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:17.126 Processing file module/accel/ioat/accel_ioat.c 00:09:17.384 Processing file module/bdev/aio/bdev_aio.c 00:09:17.384 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:17.384 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:17.384 Processing file module/bdev/delay/vbdev_delay.c 00:09:17.384 Processing file module/bdev/error/vbdev_error.c 00:09:17.384 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:17.642 Processing file module/bdev/ftl/bdev_ftl.c 00:09:17.642 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:17.642 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:17.642 Processing file module/bdev/gpt/gpt.c 00:09:17.642 Processing file module/bdev/gpt/gpt.h 00:09:17.642 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:17.642 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:17.900 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:17.900 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:17.900 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:17.900 Processing file module/bdev/malloc/bdev_malloc.c 00:09:18.157 Processing file module/bdev/null/bdev_null_rpc.c 00:09:18.157 Processing file module/bdev/null/bdev_null.c 00:09:18.414 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:18.414 Processing file module/bdev/nvme/bdev_nvme.c 00:09:18.414 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:18.414 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:18.414 Processing file module/bdev/nvme/vbdev_opal.c 00:09:18.414 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:18.414 Processing file 
module/bdev/nvme/nvme_rpc.c 00:09:18.414 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:18.414 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:18.672 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:18.672 Processing file module/bdev/raid/raid5f.c 00:09:18.672 Processing file module/bdev/raid/bdev_raid.c 00:09:18.672 Processing file module/bdev/raid/raid0.c 00:09:18.672 Processing file module/bdev/raid/raid1.c 00:09:18.672 Processing file module/bdev/raid/bdev_raid.h 00:09:18.672 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:18.672 Processing file module/bdev/raid/concat.c 00:09:18.672 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:18.672 Processing file module/bdev/split/vbdev_split.c 00:09:18.929 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:18.929 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:18.930 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:18.930 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:18.930 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:18.930 Processing file module/blob/bdev/blob_bdev.c 00:09:18.930 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:18.930 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:19.187 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:19.187 Processing file module/event/subsystems/accel/accel.c 00:09:19.187 Processing file module/event/subsystems/bdev/bdev.c 00:09:19.187 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:19.187 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:19.445 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:19.445 Processing file module/event/subsystems/nbd/nbd.c 00:09:19.445 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:19.445 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:19.445 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:19.703 Processing file module/event/subsystems/scsi/scsi.c 00:09:19.703 Processing file module/event/subsystems/sock/sock.c 00:09:19.703 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:19.703 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:19.960 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:19.960 Processing file module/event/subsystems/vmd/vmd.c 00:09:19.960 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:19.960 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:19.960 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:20.219 Processing file module/sock/sock_kernel.h 00:09:20.219 Processing file module/sock/posix/posix.c 00:09:20.219 Writing directory view page. 
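For context on the long "Processing file" pass above and the percentages below: the unit-test binaries are built with gcc coverage instrumentation, each run dumps per-object .gcda counter files, and the lcov/genhtml invocations earlier in the log capture, merge, filter, and render them. A tiny, self-contained illustration of code such counters attach to (the file name and build line are examples, not taken from this job):

/* cov_demo.c - build: gcc --coverage cov_demo.c -o cov_demo
 * run:   ./cov_demo
 * then:  lcov -c -d . -o cov.info && genhtml -o html cov.info
 * Each arm of the ternary becomes a branch counter, which is what the
 * lcov_branch_coverage=1 / genhtml_branch_coverage=1 flags above enable. */
#include <stdio.h>

static int clamp_nonneg(int v) { return v < 0 ? 0 : v; }  /* two branch arms */

int main(void)
{
    printf("%d %d\n", clamp_nonneg(-1), clamp_nonneg(5));  /* hits both arms */
    return 0;
}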
00:09:20.219 Overall coverage rate: 00:09:20.219 lines......: 39.1% (39265 of 100422 lines) 00:09:20.219 functions..: 42.8% (3587 of 8384 functions) 00:09:20.219 00:09:20.219 00:09:20.219 ===================== 00:09:20.219 All unit tests passed 00:09:20.219 ===================== 00:09:20.219 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:20.219 16:26:51 -- unit/unittest.sh@302 -- # set +x 00:09:20.219 00:09:20.219 00:09:20.219 ************************************ 00:09:20.219 END TEST unittest 00:09:20.219 ************************************ 00:09:20.219 00:09:20.219 real 3m6.576s 00:09:20.219 user 2m37.033s 00:09:20.219 sys 0m20.208s 00:09:20.219 16:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.219 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.219 16:26:51 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:09:20.219 16:26:51 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:09:20.219 16:26:51 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:09:20.219 16:26:51 -- spdk/autotest.sh@173 -- # timing_enter lib 00:09:20.219 16:26:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:20.219 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.219 16:26:51 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:20.219 16:26:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:20.219 16:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:20.219 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.219 ************************************ 00:09:20.219 START TEST env 00:09:20.219 ************************************ 00:09:20.219 16:26:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:20.478 * Looking for test storage... 
00:09:20.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:20.478 16:26:51 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:20.478 16:26:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:20.478 16:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:20.478 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.478 ************************************ 00:09:20.478 START TEST env_memory 00:09:20.478 ************************************ 00:09:20.478 16:26:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:20.478 00:09:20.478 00:09:20.478 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.478 http://cunit.sourceforge.net/ 00:09:20.478 00:09:20.478 00:09:20.478 Suite: memory 00:09:20.478 Test: alloc and free memory map ...[2024-07-13 16:26:51.819965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:20.478 passed 00:09:20.478 Test: mem map translation ...[2024-07-13 16:26:51.874868] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:20.478 [2024-07-13 16:26:51.875117] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:20.478 [2024-07-13 16:26:51.875290] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:20.478 [2024-07-13 16:26:51.875505] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:20.478 passed 00:09:20.737 Test: mem map registration ...[2024-07-13 16:26:51.954804] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:20.737 [2024-07-13 16:26:51.954979] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:20.737 passed 00:09:20.737 Test: mem map adjacent registrations ...passed 00:09:20.737 00:09:20.737 Run Summary: Type Total Ran Passed Failed Inactive 00:09:20.737 suites 1 1 n/a 0 0 00:09:20.737 tests 4 4 4 0 0 00:09:20.737 asserts 152 152 152 0 n/a 00:09:20.737 00:09:20.737 Elapsed time = 0.258 seconds 00:09:20.737 00:09:20.737 real 0m0.299s 00:09:20.737 ************************************ 00:09:20.737 END TEST env_memory 00:09:20.737 user 0m0.270s 00:09:20.737 sys 0m0.029s 00:09:20.737 16:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.737 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:09:20.737 ************************************ 00:09:20.737 16:26:52 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:20.737 16:26:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:20.737 16:26:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:20.737 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:09:20.737 ************************************ 00:09:20.737 START TEST env_vtophys 00:09:20.737 ************************************ 00:09:20.737 16:26:52 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:20.737 EAL: lib.eal log level changed from notice to debug 00:09:20.737 EAL: Detected lcore 0 as core 0 on socket 0 00:09:20.737 EAL: Detected lcore 1 as core 0 on socket 0 00:09:20.737 EAL: Detected lcore 2 as core 0 on socket 0 00:09:20.737 EAL: Detected lcore 3 as core 0 on socket 0 00:09:20.737 EAL: Detected lcore 4 as core 0 on socket 0 00:09:20.737 EAL: Detected lcore 5 as core 0 on socket 0 00:09:20.737 EAL: Detected lcore 6 as core 0 on socket 0 00:09:20.737 EAL: Detected lcore 7 as core 0 on socket 0 00:09:20.737 EAL: Detected lcore 8 as core 0 on socket 0 00:09:20.737 EAL: Detected lcore 9 as core 0 on socket 0 00:09:20.737 EAL: Maximum logical cores by configuration: 128 00:09:20.737 EAL: Detected CPU lcores: 10 00:09:20.737 EAL: Detected NUMA nodes: 1 00:09:20.737 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:09:20.737 EAL: Checking presence of .so 'librte_eal.so.23' 00:09:20.737 EAL: Checking presence of .so 'librte_eal.so' 00:09:20.737 EAL: Detected static linkage of DPDK 00:09:20.737 EAL: No shared files mode enabled, IPC will be disabled 00:09:20.737 EAL: Selected IOVA mode 'PA' 00:09:20.737 EAL: Probing VFIO support... 00:09:20.737 EAL: IOMMU type 1 (Type 1) is supported 00:09:20.737 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:20.737 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:20.737 EAL: VFIO support initialized 00:09:20.737 EAL: Ask a virtual area of 0x2e000 bytes 00:09:20.737 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:20.737 EAL: Setting up physically contiguous memory... 00:09:20.737 EAL: Setting maximum number of open files to 1048576 00:09:20.737 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:20.737 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:20.737 EAL: Ask a virtual area of 0x61000 bytes 00:09:20.737 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:20.738 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:20.738 EAL: Ask a virtual area of 0x400000000 bytes 00:09:20.738 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:20.738 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:20.738 EAL: Ask a virtual area of 0x61000 bytes 00:09:20.738 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:20.738 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:20.738 EAL: Ask a virtual area of 0x400000000 bytes 00:09:20.738 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:20.738 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:20.738 EAL: Ask a virtual area of 0x61000 bytes 00:09:20.738 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:20.738 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:20.738 EAL: Ask a virtual area of 0x400000000 bytes 00:09:20.738 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:20.738 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:20.738 EAL: Ask a virtual area of 0x61000 bytes 00:09:20.738 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:20.738 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:20.738 EAL: Ask a virtual area of 0x400000000 bytes 00:09:20.738 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:20.738 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:20.738 EAL: Hugepages will be freed exactly as allocated. 
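The "Ask a virtual area ... VA reserved for memseg list" lines above show EAL pre-reserving address space for hugepage segment lists before any memory is actually committed. A stripped-down sketch of that reservation pattern using plain mmap rather than DPDK's internal helpers (the size is the 0x400000000-byte figure from the log):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 0x400000000UL;   /* 16 GiB of address space, as in the log */

    /* PROT_NONE + MAP_NORESERVE claims a contiguous virtual range without
     * backing it with physical pages; hugepages get mapped into it later. */
    void *va = mmap(NULL, len, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (va == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("Virtual area found at %p (size = 0x%zx)\n", va, len);
    munmap(va, len);
    return 0;
}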
00:09:20.738 EAL: No shared files mode enabled, IPC is disabled 00:09:20.738 EAL: No shared files mode enabled, IPC is disabled 00:09:20.996 EAL: TSC frequency is ~2100000 KHz 00:09:20.996 EAL: Main lcore 0 is ready (tid=7f0d6afb1a80;cpuset=[0]) 00:09:20.996 EAL: Trying to obtain current memory policy. 00:09:20.996 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:20.996 EAL: Restoring previous memory policy: 0 00:09:20.996 EAL: request: mp_malloc_sync 00:09:20.996 EAL: No shared files mode enabled, IPC is disabled 00:09:20.996 EAL: Heap on socket 0 was expanded by 2MB 00:09:20.996 EAL: No shared files mode enabled, IPC is disabled 00:09:20.996 EAL: Mem event callback 'spdk:(nil)' registered 00:09:20.996 00:09:20.996 00:09:20.996 CUnit - A unit testing framework for C - Version 2.1-3 00:09:20.996 http://cunit.sourceforge.net/ 00:09:20.996 00:09:20.996 00:09:20.996 Suite: components_suite 00:09:21.563 Test: vtophys_malloc_test ...passed 00:09:21.563 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:21.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.563 EAL: Restoring previous memory policy: 0 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was expanded by 4MB 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was shrunk by 4MB 00:09:21.563 EAL: Trying to obtain current memory policy. 00:09:21.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.563 EAL: Restoring previous memory policy: 0 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was expanded by 6MB 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was shrunk by 6MB 00:09:21.563 EAL: Trying to obtain current memory policy. 00:09:21.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.563 EAL: Restoring previous memory policy: 0 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was expanded by 10MB 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was shrunk by 10MB 00:09:21.563 EAL: Trying to obtain current memory policy. 00:09:21.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.563 EAL: Restoring previous memory policy: 0 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was expanded by 18MB 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was shrunk by 18MB 00:09:21.563 EAL: Trying to obtain current memory policy. 
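Each "expanded by N MB ... shrunk by N MB" pair in this run is one vtophys_malloc_test iteration: an allocation big enough to grow the DPDK heap fires the mem event callback registered as 'spdk' (so SPDK can add virtual-to-physical translations for the new region), and the matching free fires the shrink event. A hedged sketch of that callback shape, with stand-in types instead of the real rte_mem_event API:

#include <stdio.h>

enum mem_event { MEM_EVENT_ALLOC, MEM_EVENT_FREE };

/* Stand-in for the handler SPDK hooks up via
 * rte_mem_event_callback_register("spdk", ...): on ALLOC it would register
 * the new region in the vtophys map, on FREE it would unregister it. */
static void spdk_mem_event_cb(enum mem_event ev, const void *addr, size_t len)
{
    printf("Heap on socket 0 was %s by %zuMB\n",
           ev == MEM_EVENT_ALLOC ? "expanded" : "shrunk", len >> 20);
    (void)addr;
}

int main(void)
{
    static char region[1];    /* placeholder address for the sketch */
    spdk_mem_event_cb(MEM_EVENT_ALLOC, region, (size_t)4 << 20);
    spdk_mem_event_cb(MEM_EVENT_FREE, region, (size_t)4 << 20);
    return 0;
}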
00:09:21.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.563 EAL: Restoring previous memory policy: 0 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was expanded by 34MB 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was shrunk by 34MB 00:09:21.563 EAL: Trying to obtain current memory policy. 00:09:21.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.563 EAL: Restoring previous memory policy: 0 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.563 EAL: request: mp_malloc_sync 00:09:21.563 EAL: No shared files mode enabled, IPC is disabled 00:09:21.563 EAL: Heap on socket 0 was expanded by 66MB 00:09:21.563 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.820 EAL: request: mp_malloc_sync 00:09:21.821 EAL: No shared files mode enabled, IPC is disabled 00:09:21.821 EAL: Heap on socket 0 was shrunk by 66MB 00:09:21.821 EAL: Trying to obtain current memory policy. 00:09:21.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.821 EAL: Restoring previous memory policy: 0 00:09:21.821 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.821 EAL: request: mp_malloc_sync 00:09:21.821 EAL: No shared files mode enabled, IPC is disabled 00:09:21.821 EAL: Heap on socket 0 was expanded by 130MB 00:09:21.821 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.821 EAL: request: mp_malloc_sync 00:09:21.821 EAL: No shared files mode enabled, IPC is disabled 00:09:21.821 EAL: Heap on socket 0 was shrunk by 130MB 00:09:21.821 EAL: Trying to obtain current memory policy. 00:09:21.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:21.821 EAL: Restoring previous memory policy: 0 00:09:21.821 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.821 EAL: request: mp_malloc_sync 00:09:21.821 EAL: No shared files mode enabled, IPC is disabled 00:09:21.821 EAL: Heap on socket 0 was expanded by 258MB 00:09:22.078 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.078 EAL: request: mp_malloc_sync 00:09:22.078 EAL: No shared files mode enabled, IPC is disabled 00:09:22.078 EAL: Heap on socket 0 was shrunk by 258MB 00:09:22.078 EAL: Trying to obtain current memory policy. 00:09:22.078 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:22.336 EAL: Restoring previous memory policy: 0 00:09:22.336 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.336 EAL: request: mp_malloc_sync 00:09:22.336 EAL: No shared files mode enabled, IPC is disabled 00:09:22.336 EAL: Heap on socket 0 was expanded by 514MB 00:09:22.336 EAL: Calling mem event callback 'spdk:(nil)' 00:09:22.593 EAL: request: mp_malloc_sync 00:09:22.593 EAL: No shared files mode enabled, IPC is disabled 00:09:22.593 EAL: Heap on socket 0 was shrunk by 514MB 00:09:22.593 EAL: Trying to obtain current memory policy. 
00:09:22.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:23.157 EAL: Restoring previous memory policy: 0 00:09:23.157 EAL: Calling mem event callback 'spdk:(nil)' 00:09:23.157 EAL: request: mp_malloc_sync 00:09:23.157 EAL: No shared files mode enabled, IPC is disabled 00:09:23.157 EAL: Heap on socket 0 was expanded by 1026MB 00:09:23.414 EAL: Calling mem event callback 'spdk:(nil)' 00:09:23.771 EAL: request: mp_malloc_sync 00:09:23.771 EAL: No shared files mode enabled, IPC is disabled 00:09:23.771 passed 00:09:23.771 00:09:23.771 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:23.771 Run Summary: Type Total Ran Passed Failed Inactive 00:09:23.771 suites 1 1 n/a 0 0 00:09:23.771 tests 2 2 2 0 0 00:09:23.771 asserts 6331 6331 6331 0 n/a 00:09:23.771 00:09:23.771 Elapsed time = 2.578 seconds 00:09:23.771 EAL: Calling mem event callback 'spdk:(nil)' 00:09:23.771 EAL: request: mp_malloc_sync 00:09:23.771 EAL: No shared files mode enabled, IPC is disabled 00:09:23.771 EAL: Heap on socket 0 was shrunk by 2MB 00:09:23.771 EAL: No shared files mode enabled, IPC is disabled 00:09:23.771 EAL: No shared files mode enabled, IPC is disabled 00:09:23.771 EAL: No shared files mode enabled, IPC is disabled 00:09:23.771 ************************************ 00:09:23.771 END TEST env_vtophys 00:09:23.771 ************************************ 00:09:23.771 00:09:23.771 real 0m2.872s 00:09:23.771 user 0m1.444s 00:09:23.771 sys 0m1.270s 00:09:23.771 16:26:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.771 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.771 16:26:55 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:23.771 16:26:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:23.771 16:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:23.771 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:23.771 ************************************ 00:09:23.771 START TEST env_pci 00:09:23.771 ************************************ 00:09:23.771 16:26:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:23.771 00:09:23.771 00:09:23.771 CUnit - A unit testing framework for C - Version 2.1-3 00:09:23.771 http://cunit.sourceforge.net/ 00:09:23.771 00:09:23.771 00:09:23.771 Suite: pci 00:09:23.771 Test: pci_hook ...[2024-07-13 16:26:55.086903] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 114866 has claimed it 00:09:23.771 EAL: Cannot find device (10000:00:01.0) 00:09:23.771 EAL: Failed to attach device on primary process 00:09:23.771 passed 00:09:23.771 00:09:23.771 Run Summary: Type Total Ran Passed Failed Inactive 00:09:23.771 suites 1 1 n/a 0 0 00:09:23.771 tests 1 1 1 0 0 00:09:23.772 asserts 25 25 25 0 n/a 00:09:23.772 00:09:23.772 Elapsed time = 0.008 seconds 00:09:23.772 00:09:23.772 real 0m0.088s 00:09:23.772 user 0m0.034s 00:09:23.772 sys 0m0.051s 00:09:23.772 16:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.772 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:23.772 ************************************ 00:09:23.772 END TEST env_pci 00:09:23.772 ************************************ 00:09:24.030 16:26:55 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:24.030 16:26:55 -- env/env.sh@15 -- # uname 00:09:24.030 16:26:55 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:24.030 16:26:55 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:09:24.030 16:26:55 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:24.030 16:26:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:24.030 16:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.030 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.030 ************************************ 00:09:24.030 START TEST env_dpdk_post_init 00:09:24.030 ************************************ 00:09:24.030 16:26:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:24.030 EAL: Detected CPU lcores: 10 00:09:24.030 EAL: Detected NUMA nodes: 1 00:09:24.030 EAL: Detected static linkage of DPDK 00:09:24.030 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:24.030 EAL: Selected IOVA mode 'PA' 00:09:24.030 EAL: VFIO support initialized 00:09:24.030 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:24.030 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:09:24.030 Starting DPDK initialization... 00:09:24.030 Starting SPDK post initialization... 00:09:24.030 SPDK NVMe probe 00:09:24.030 Attaching to 0000:00:06.0 00:09:24.030 Attached to 0000:00:06.0 00:09:24.030 Cleaning up... 00:09:24.030 00:09:24.030 real 0m0.254s 00:09:24.030 user 0m0.052s 00:09:24.030 sys 0m0.103s 00:09:24.030 16:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.030 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.030 ************************************ 00:09:24.030 END TEST env_dpdk_post_init 00:09:24.030 ************************************ 00:09:24.288 16:26:55 -- env/env.sh@26 -- # uname 00:09:24.288 16:26:55 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:24.288 16:26:55 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:24.288 16:26:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:24.288 16:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.288 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.288 ************************************ 00:09:24.288 START TEST env_mem_callbacks 00:09:24.288 ************************************ 00:09:24.288 16:26:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:24.288 EAL: Detected CPU lcores: 10 00:09:24.288 EAL: Detected NUMA nodes: 1 00:09:24.288 EAL: Detected static linkage of DPDK 00:09:24.288 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:24.288 EAL: Selected IOVA mode 'PA' 00:09:24.288 EAL: VFIO support initialized 00:09:24.288 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:24.288 00:09:24.288 00:09:24.288 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.288 http://cunit.sourceforge.net/ 00:09:24.288 00:09:24.288 00:09:24.288 Suite: memory 00:09:24.288 Test: test ... 
00:09:24.288 register 0x200000200000 2097152 00:09:24.288 malloc 3145728 00:09:24.288 register 0x200000400000 4194304 00:09:24.288 buf 0x200000500000 len 3145728 PASSED 00:09:24.288 malloc 64 00:09:24.288 buf 0x2000004fff40 len 64 PASSED 00:09:24.288 malloc 4194304 00:09:24.288 register 0x200000800000 6291456 00:09:24.288 buf 0x200000a00000 len 4194304 PASSED 00:09:24.288 free 0x200000500000 3145728 00:09:24.288 free 0x2000004fff40 64 00:09:24.288 unregister 0x200000400000 4194304 PASSED 00:09:24.288 free 0x200000a00000 4194304 00:09:24.288 unregister 0x200000800000 6291456 PASSED 00:09:24.288 malloc 8388608 00:09:24.288 register 0x200000400000 10485760 00:09:24.288 buf 0x200000600000 len 8388608 PASSED 00:09:24.288 free 0x200000600000 8388608 00:09:24.288 unregister 0x200000400000 10485760 PASSED 00:09:24.288 passed 00:09:24.288 00:09:24.288 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.288 suites 1 1 n/a 0 0 00:09:24.288 tests 1 1 1 0 0 00:09:24.288 asserts 15 15 15 0 n/a 00:09:24.288 00:09:24.288 Elapsed time = 0.008 seconds 00:09:24.288 00:09:24.288 real 0m0.202s 00:09:24.288 user 0m0.055s 00:09:24.288 sys 0m0.044s 00:09:24.288 16:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.288 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.288 ************************************ 00:09:24.288 END TEST env_mem_callbacks 00:09:24.288 ************************************ 00:09:24.546 ************************************ 00:09:24.546 END TEST env 00:09:24.546 ************************************ 00:09:24.546 00:09:24.546 real 0m4.158s 00:09:24.546 user 0m2.046s 00:09:24.546 sys 0m1.755s 00:09:24.546 16:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.546 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.546 16:26:55 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:24.546 16:26:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:24.546 16:26:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.546 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.546 ************************************ 00:09:24.546 START TEST rpc 00:09:24.546 ************************************ 00:09:24.546 16:26:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:24.546 * Looking for test storage... 00:09:24.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:24.546 16:26:55 -- rpc/rpc.sh@65 -- # spdk_pid=114995 00:09:24.546 16:26:55 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:24.546 16:26:55 -- rpc/rpc.sh@67 -- # waitforlisten 114995 00:09:24.546 16:26:55 -- common/autotest_common.sh@819 -- # '[' -z 114995 ']' 00:09:24.546 16:26:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.546 16:26:55 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:24.546 16:26:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:24.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.546 16:26:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
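The rpc suite above launches its own target before testing; a minimal hand-driven sketch of the same startup, using the binary and script paths shown in this log. The liveness probe via rpc_get_methods is an assumption standing in for the harness's own waitforlisten helper:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &                # -e bdev: enable the bdev tracepoint group
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null   # returns once /var/tmp/spdk.sock is listening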
00:09:24.546 16:26:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:24.546 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.805 [2024-07-13 16:26:56.066571] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:24.805 [2024-07-13 16:26:56.067035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114995 ] 00:09:24.805 [2024-07-13 16:26:56.216543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.063 [2024-07-13 16:26:56.292247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.063 [2024-07-13 16:26:56.292749] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:25.063 [2024-07-13 16:26:56.292891] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 114995' to capture a snapshot of events at runtime. 00:09:25.063 [2024-07-13 16:26:56.293012] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid114995 for offline analysis/debug. 00:09:25.063 [2024-07-13 16:26:56.293139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.628 16:26:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:25.628 16:26:56 -- common/autotest_common.sh@852 -- # return 0 00:09:25.628 16:26:56 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:25.628 16:26:56 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:25.628 16:26:56 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:25.628 16:26:56 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:25.628 16:26:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:25.628 16:26:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:25.628 16:26:56 -- common/autotest_common.sh@10 -- # set +x 00:09:25.628 ************************************ 00:09:25.628 START TEST rpc_integrity 00:09:25.628 ************************************ 00:09:25.628 16:26:56 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:09:25.628 16:26:56 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:25.628 16:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.628 16:26:56 -- common/autotest_common.sh@10 -- # set +x 00:09:25.628 16:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.628 16:26:56 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:25.628 16:26:56 -- rpc/rpc.sh@13 -- # jq length 00:09:25.628 16:26:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:25.628 16:26:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:25.628 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.628 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.628 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.628 16:26:57 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:25.628 16:26:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:25.628 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.628 16:26:57 -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.628 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.628 16:26:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:25.628 { 00:09:25.628 "name": "Malloc0", 00:09:25.628 "aliases": [ 00:09:25.628 "7b4ae241-9998-4b2f-90d4-2b680d15ef03" 00:09:25.628 ], 00:09:25.628 "product_name": "Malloc disk", 00:09:25.628 "block_size": 512, 00:09:25.628 "num_blocks": 16384, 00:09:25.628 "uuid": "7b4ae241-9998-4b2f-90d4-2b680d15ef03", 00:09:25.628 "assigned_rate_limits": { 00:09:25.628 "rw_ios_per_sec": 0, 00:09:25.628 "rw_mbytes_per_sec": 0, 00:09:25.628 "r_mbytes_per_sec": 0, 00:09:25.628 "w_mbytes_per_sec": 0 00:09:25.628 }, 00:09:25.628 "claimed": false, 00:09:25.628 "zoned": false, 00:09:25.628 "supported_io_types": { 00:09:25.628 "read": true, 00:09:25.628 "write": true, 00:09:25.628 "unmap": true, 00:09:25.628 "write_zeroes": true, 00:09:25.628 "flush": true, 00:09:25.628 "reset": true, 00:09:25.628 "compare": false, 00:09:25.628 "compare_and_write": false, 00:09:25.628 "abort": true, 00:09:25.628 "nvme_admin": false, 00:09:25.628 "nvme_io": false 00:09:25.628 }, 00:09:25.628 "memory_domains": [ 00:09:25.628 { 00:09:25.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.628 "dma_device_type": 2 00:09:25.628 } 00:09:25.628 ], 00:09:25.628 "driver_specific": {} 00:09:25.628 } 00:09:25.628 ]' 00:09:25.628 16:26:57 -- rpc/rpc.sh@17 -- # jq length 00:09:25.886 16:26:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:25.886 16:26:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:25.886 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.886 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.886 [2024-07-13 16:26:57.117672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:25.886 [2024-07-13 16:26:57.117897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.886 [2024-07-13 16:26:57.118004] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006080 00:09:25.886 [2024-07-13 16:26:57.118196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.886 [2024-07-13 16:26:57.121241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.886 [2024-07-13 16:26:57.121420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:25.886 Passthru0 00:09:25.886 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.886 16:26:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:25.886 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.887 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.887 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.887 16:26:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:25.887 { 00:09:25.887 "name": "Malloc0", 00:09:25.887 "aliases": [ 00:09:25.887 "7b4ae241-9998-4b2f-90d4-2b680d15ef03" 00:09:25.887 ], 00:09:25.887 "product_name": "Malloc disk", 00:09:25.887 "block_size": 512, 00:09:25.887 "num_blocks": 16384, 00:09:25.887 "uuid": "7b4ae241-9998-4b2f-90d4-2b680d15ef03", 00:09:25.887 "assigned_rate_limits": { 00:09:25.887 "rw_ios_per_sec": 0, 00:09:25.887 "rw_mbytes_per_sec": 0, 00:09:25.887 "r_mbytes_per_sec": 0, 00:09:25.887 "w_mbytes_per_sec": 0 00:09:25.887 }, 00:09:25.887 "claimed": true, 00:09:25.887 "claim_type": "exclusive_write", 00:09:25.887 "zoned": false, 00:09:25.887 "supported_io_types": { 00:09:25.887 "read": true, 
00:09:25.887 "write": true, 00:09:25.887 "unmap": true, 00:09:25.887 "write_zeroes": true, 00:09:25.887 "flush": true, 00:09:25.887 "reset": true, 00:09:25.887 "compare": false, 00:09:25.887 "compare_and_write": false, 00:09:25.887 "abort": true, 00:09:25.887 "nvme_admin": false, 00:09:25.887 "nvme_io": false 00:09:25.887 }, 00:09:25.887 "memory_domains": [ 00:09:25.887 { 00:09:25.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.887 "dma_device_type": 2 00:09:25.887 } 00:09:25.887 ], 00:09:25.887 "driver_specific": {} 00:09:25.887 }, 00:09:25.887 { 00:09:25.887 "name": "Passthru0", 00:09:25.887 "aliases": [ 00:09:25.887 "98111757-8b5c-5751-8c59-2013501fe93d" 00:09:25.887 ], 00:09:25.887 "product_name": "passthru", 00:09:25.887 "block_size": 512, 00:09:25.887 "num_blocks": 16384, 00:09:25.887 "uuid": "98111757-8b5c-5751-8c59-2013501fe93d", 00:09:25.887 "assigned_rate_limits": { 00:09:25.887 "rw_ios_per_sec": 0, 00:09:25.887 "rw_mbytes_per_sec": 0, 00:09:25.887 "r_mbytes_per_sec": 0, 00:09:25.887 "w_mbytes_per_sec": 0 00:09:25.887 }, 00:09:25.887 "claimed": false, 00:09:25.887 "zoned": false, 00:09:25.887 "supported_io_types": { 00:09:25.887 "read": true, 00:09:25.887 "write": true, 00:09:25.887 "unmap": true, 00:09:25.887 "write_zeroes": true, 00:09:25.887 "flush": true, 00:09:25.887 "reset": true, 00:09:25.887 "compare": false, 00:09:25.887 "compare_and_write": false, 00:09:25.887 "abort": true, 00:09:25.887 "nvme_admin": false, 00:09:25.887 "nvme_io": false 00:09:25.887 }, 00:09:25.887 "memory_domains": [ 00:09:25.887 { 00:09:25.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.887 "dma_device_type": 2 00:09:25.887 } 00:09:25.887 ], 00:09:25.887 "driver_specific": { 00:09:25.887 "passthru": { 00:09:25.887 "name": "Passthru0", 00:09:25.887 "base_bdev_name": "Malloc0" 00:09:25.887 } 00:09:25.887 } 00:09:25.887 } 00:09:25.887 ]' 00:09:25.887 16:26:57 -- rpc/rpc.sh@21 -- # jq length 00:09:25.887 16:26:57 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:25.887 16:26:57 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:25.887 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.887 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.887 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.887 16:26:57 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:25.887 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.887 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.887 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.887 16:26:57 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:25.887 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.887 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.887 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.887 16:26:57 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:25.887 16:26:57 -- rpc/rpc.sh@26 -- # jq length 00:09:25.887 16:26:57 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:25.887 00:09:25.887 real 0m0.273s 00:09:25.887 user 0m0.158s 00:09:25.887 sys 0m0.041s 00:09:25.887 16:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.887 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.887 ************************************ 00:09:25.887 END TEST rpc_integrity 00:09:25.887 ************************************ 00:09:25.887 16:26:57 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:25.887 16:26:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
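The rpc_integrity test above drives the target purely over JSON-RPC; the same sequence can be replayed by hand, a sketch built from the exact rpc_cmd invocations in this log:
    scripts/rpc.py bdev_malloc_create 8 512               # 8 MiB bdev, 512-byte blocks; prints its name (Malloc0)
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length             # 2 while Malloc0 and Passthru0 both exist
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length             # back to 0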
00:09:25.887 16:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:25.887 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.887 ************************************ 00:09:25.887 START TEST rpc_plugins 00:09:25.887 ************************************ 00:09:25.887 16:26:57 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:09:25.887 16:26:57 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:25.887 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.887 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.887 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.887 16:26:57 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:25.887 16:26:57 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:25.887 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.887 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.887 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.887 16:26:57 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:25.887 { 00:09:25.887 "name": "Malloc1", 00:09:25.887 "aliases": [ 00:09:25.887 "ba200b8e-4d77-400f-be9e-0bb623876363" 00:09:25.887 ], 00:09:25.887 "product_name": "Malloc disk", 00:09:25.887 "block_size": 4096, 00:09:25.887 "num_blocks": 256, 00:09:25.887 "uuid": "ba200b8e-4d77-400f-be9e-0bb623876363", 00:09:25.887 "assigned_rate_limits": { 00:09:25.887 "rw_ios_per_sec": 0, 00:09:25.887 "rw_mbytes_per_sec": 0, 00:09:25.887 "r_mbytes_per_sec": 0, 00:09:25.887 "w_mbytes_per_sec": 0 00:09:25.887 }, 00:09:25.887 "claimed": false, 00:09:25.887 "zoned": false, 00:09:25.887 "supported_io_types": { 00:09:25.887 "read": true, 00:09:25.887 "write": true, 00:09:25.887 "unmap": true, 00:09:25.887 "write_zeroes": true, 00:09:25.887 "flush": true, 00:09:25.887 "reset": true, 00:09:25.887 "compare": false, 00:09:25.887 "compare_and_write": false, 00:09:25.887 "abort": true, 00:09:25.887 "nvme_admin": false, 00:09:25.887 "nvme_io": false 00:09:25.887 }, 00:09:25.887 "memory_domains": [ 00:09:25.887 { 00:09:25.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.887 "dma_device_type": 2 00:09:25.887 } 00:09:25.887 ], 00:09:25.887 "driver_specific": {} 00:09:25.887 } 00:09:25.887 ]' 00:09:25.887 16:26:57 -- rpc/rpc.sh@32 -- # jq length 00:09:26.146 16:26:57 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:26.146 16:26:57 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:26.146 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.146 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.146 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.146 16:26:57 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:26.146 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.146 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.146 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.146 16:26:57 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:26.146 16:26:57 -- rpc/rpc.sh@36 -- # jq length 00:09:26.146 16:26:57 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:26.146 00:09:26.146 real 0m0.149s 00:09:26.146 user 0m0.094s 00:09:26.146 sys 0m0.021s 00:09:26.146 16:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.146 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.146 ************************************ 00:09:26.146 END TEST rpc_plugins 00:09:26.146 ************************************ 00:09:26.146 16:26:57 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:09:26.146 16:26:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:26.146 16:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:26.146 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.146 ************************************ 00:09:26.146 START TEST rpc_trace_cmd_test 00:09:26.146 ************************************ 00:09:26.146 16:26:57 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:09:26.146 16:26:57 -- rpc/rpc.sh@40 -- # local info 00:09:26.146 16:26:57 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:26.146 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.146 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.146 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.146 16:26:57 -- rpc/rpc.sh@42 -- # info='{ 00:09:26.146 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid114995", 00:09:26.146 "tpoint_group_mask": "0x8", 00:09:26.146 "iscsi_conn": { 00:09:26.146 "mask": "0x2", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "scsi": { 00:09:26.146 "mask": "0x4", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "bdev": { 00:09:26.146 "mask": "0x8", 00:09:26.146 "tpoint_mask": "0xffffffffffffffff" 00:09:26.146 }, 00:09:26.146 "nvmf_rdma": { 00:09:26.146 "mask": "0x10", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "nvmf_tcp": { 00:09:26.146 "mask": "0x20", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "ftl": { 00:09:26.146 "mask": "0x40", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "blobfs": { 00:09:26.146 "mask": "0x80", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "dsa": { 00:09:26.146 "mask": "0x200", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "thread": { 00:09:26.146 "mask": "0x400", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "nvme_pcie": { 00:09:26.146 "mask": "0x800", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "iaa": { 00:09:26.146 "mask": "0x1000", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "nvme_tcp": { 00:09:26.146 "mask": "0x2000", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 }, 00:09:26.146 "bdev_nvme": { 00:09:26.146 "mask": "0x4000", 00:09:26.146 "tpoint_mask": "0x0" 00:09:26.146 } 00:09:26.146 }' 00:09:26.146 16:26:57 -- rpc/rpc.sh@43 -- # jq length 00:09:26.146 16:26:57 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:09:26.146 16:26:57 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:26.404 16:26:57 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:26.404 16:26:57 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:26.404 16:26:57 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:26.404 16:26:57 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:26.404 16:26:57 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:26.404 16:26:57 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:26.404 16:26:57 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:26.404 00:09:26.404 real 0m0.254s 00:09:26.404 user 0m0.203s 00:09:26.404 sys 0m0.041s 00:09:26.404 16:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.404 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.404 ************************************ 00:09:26.404 END TEST rpc_trace_cmd_test 00:09:26.404 ************************************ 00:09:26.404 16:26:57 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:26.404 16:26:57 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:26.404 16:26:57 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:09:26.404 16:26:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:26.404 16:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:26.404 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.404 ************************************ 00:09:26.404 START TEST rpc_daemon_integrity 00:09:26.404 ************************************ 00:09:26.404 16:26:57 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:09:26.404 16:26:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:26.404 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.404 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.404 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.404 16:26:57 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:26.404 16:26:57 -- rpc/rpc.sh@13 -- # jq length 00:09:26.661 16:26:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:26.661 16:26:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:26.661 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.661 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.661 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.661 16:26:57 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:26.661 16:26:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:26.661 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.661 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.661 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.661 16:26:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:26.661 { 00:09:26.661 "name": "Malloc2", 00:09:26.661 "aliases": [ 00:09:26.661 "772101e9-5ab1-468d-88e0-4ac8cad19370" 00:09:26.661 ], 00:09:26.661 "product_name": "Malloc disk", 00:09:26.661 "block_size": 512, 00:09:26.661 "num_blocks": 16384, 00:09:26.661 "uuid": "772101e9-5ab1-468d-88e0-4ac8cad19370", 00:09:26.661 "assigned_rate_limits": { 00:09:26.661 "rw_ios_per_sec": 0, 00:09:26.661 "rw_mbytes_per_sec": 0, 00:09:26.661 "r_mbytes_per_sec": 0, 00:09:26.661 "w_mbytes_per_sec": 0 00:09:26.661 }, 00:09:26.661 "claimed": false, 00:09:26.661 "zoned": false, 00:09:26.661 "supported_io_types": { 00:09:26.661 "read": true, 00:09:26.661 "write": true, 00:09:26.661 "unmap": true, 00:09:26.661 "write_zeroes": true, 00:09:26.661 "flush": true, 00:09:26.661 "reset": true, 00:09:26.661 "compare": false, 00:09:26.661 "compare_and_write": false, 00:09:26.661 "abort": true, 00:09:26.661 "nvme_admin": false, 00:09:26.661 "nvme_io": false 00:09:26.661 }, 00:09:26.661 "memory_domains": [ 00:09:26.661 { 00:09:26.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.661 "dma_device_type": 2 00:09:26.661 } 00:09:26.661 ], 00:09:26.661 "driver_specific": {} 00:09:26.661 } 00:09:26.661 ]' 00:09:26.661 16:26:57 -- rpc/rpc.sh@17 -- # jq length 00:09:26.661 16:26:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:26.661 16:26:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:26.661 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.661 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.661 [2024-07-13 16:26:57.953224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:26.661 [2024-07-13 16:26:57.953431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.661 [2024-07-13 16:26:57.953510] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:26.661 
[2024-07-13 16:26:57.953629] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.661 [2024-07-13 16:26:57.956385] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.661 [2024-07-13 16:26:57.956552] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:26.661 Passthru0 00:09:26.661 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.661 16:26:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:26.661 16:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.661 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.661 16:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.661 16:26:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:26.661 { 00:09:26.661 "name": "Malloc2", 00:09:26.661 "aliases": [ 00:09:26.661 "772101e9-5ab1-468d-88e0-4ac8cad19370" 00:09:26.661 ], 00:09:26.661 "product_name": "Malloc disk", 00:09:26.661 "block_size": 512, 00:09:26.661 "num_blocks": 16384, 00:09:26.661 "uuid": "772101e9-5ab1-468d-88e0-4ac8cad19370", 00:09:26.661 "assigned_rate_limits": { 00:09:26.661 "rw_ios_per_sec": 0, 00:09:26.661 "rw_mbytes_per_sec": 0, 00:09:26.661 "r_mbytes_per_sec": 0, 00:09:26.661 "w_mbytes_per_sec": 0 00:09:26.661 }, 00:09:26.661 "claimed": true, 00:09:26.661 "claim_type": "exclusive_write", 00:09:26.661 "zoned": false, 00:09:26.661 "supported_io_types": { 00:09:26.661 "read": true, 00:09:26.661 "write": true, 00:09:26.661 "unmap": true, 00:09:26.661 "write_zeroes": true, 00:09:26.661 "flush": true, 00:09:26.661 "reset": true, 00:09:26.661 "compare": false, 00:09:26.661 "compare_and_write": false, 00:09:26.661 "abort": true, 00:09:26.662 "nvme_admin": false, 00:09:26.662 "nvme_io": false 00:09:26.662 }, 00:09:26.662 "memory_domains": [ 00:09:26.662 { 00:09:26.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.662 "dma_device_type": 2 00:09:26.662 } 00:09:26.662 ], 00:09:26.662 "driver_specific": {} 00:09:26.662 }, 00:09:26.662 { 00:09:26.662 "name": "Passthru0", 00:09:26.662 "aliases": [ 00:09:26.662 "72c515dc-1361-593e-b5b6-e3cbccb50728" 00:09:26.662 ], 00:09:26.662 "product_name": "passthru", 00:09:26.662 "block_size": 512, 00:09:26.662 "num_blocks": 16384, 00:09:26.662 "uuid": "72c515dc-1361-593e-b5b6-e3cbccb50728", 00:09:26.662 "assigned_rate_limits": { 00:09:26.662 "rw_ios_per_sec": 0, 00:09:26.662 "rw_mbytes_per_sec": 0, 00:09:26.662 "r_mbytes_per_sec": 0, 00:09:26.662 "w_mbytes_per_sec": 0 00:09:26.662 }, 00:09:26.662 "claimed": false, 00:09:26.662 "zoned": false, 00:09:26.662 "supported_io_types": { 00:09:26.662 "read": true, 00:09:26.662 "write": true, 00:09:26.662 "unmap": true, 00:09:26.662 "write_zeroes": true, 00:09:26.662 "flush": true, 00:09:26.662 "reset": true, 00:09:26.662 "compare": false, 00:09:26.662 "compare_and_write": false, 00:09:26.662 "abort": true, 00:09:26.662 "nvme_admin": false, 00:09:26.662 "nvme_io": false 00:09:26.662 }, 00:09:26.662 "memory_domains": [ 00:09:26.662 { 00:09:26.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.662 "dma_device_type": 2 00:09:26.662 } 00:09:26.662 ], 00:09:26.662 "driver_specific": { 00:09:26.662 "passthru": { 00:09:26.662 "name": "Passthru0", 00:09:26.662 "base_bdev_name": "Malloc2" 00:09:26.662 } 00:09:26.662 } 00:09:26.662 } 00:09:26.662 ]' 00:09:26.662 16:26:57 -- rpc/rpc.sh@21 -- # jq length 00:09:26.662 16:26:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:26.662 16:26:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:26.662 16:26:58 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.662 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:09:26.662 16:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.662 16:26:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:26.662 16:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.662 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:09:26.662 16:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.662 16:26:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:26.662 16:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.662 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:09:26.662 16:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.662 16:26:58 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:26.662 16:26:58 -- rpc/rpc.sh@26 -- # jq length 00:09:26.662 16:26:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:26.662 00:09:26.662 real 0m0.278s 00:09:26.662 user 0m0.153s 00:09:26.662 sys 0m0.053s 00:09:26.662 16:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.662 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:09:26.662 ************************************ 00:09:26.662 END TEST rpc_daemon_integrity 00:09:26.662 ************************************ 00:09:26.920 16:26:58 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:26.920 16:26:58 -- rpc/rpc.sh@84 -- # killprocess 114995 00:09:26.920 16:26:58 -- common/autotest_common.sh@926 -- # '[' -z 114995 ']' 00:09:26.920 16:26:58 -- common/autotest_common.sh@930 -- # kill -0 114995 00:09:26.920 16:26:58 -- common/autotest_common.sh@931 -- # uname 00:09:26.920 16:26:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:26.920 16:26:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114995 00:09:26.920 16:26:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:26.920 16:26:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:26.920 16:26:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114995' 00:09:26.920 killing process with pid 114995 00:09:26.920 16:26:58 -- common/autotest_common.sh@945 -- # kill 114995 00:09:26.920 16:26:58 -- common/autotest_common.sh@950 -- # wait 114995 00:09:27.486 00:09:27.486 real 0m3.014s 00:09:27.486 user 0m3.516s 00:09:27.486 sys 0m0.904s 00:09:27.486 16:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.486 ************************************ 00:09:27.486 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:09:27.486 END TEST rpc 00:09:27.486 ************************************ 00:09:27.486 16:26:58 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:27.486 16:26:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:27.486 16:26:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.486 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:09:27.486 ************************************ 00:09:27.486 START TEST rpc_client 00:09:27.486 ************************************ 00:09:27.486 16:26:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:27.744 * Looking for test storage... 
00:09:27.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:27.745 16:26:59 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:27.745 OK 00:09:27.745 16:26:59 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:27.745 00:09:27.745 real 0m0.155s 00:09:27.745 user 0m0.091s 00:09:27.745 sys 0m0.078s 00:09:27.745 ************************************ 00:09:27.745 END TEST rpc_client 00:09:27.745 ************************************ 00:09:27.745 16:26:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.745 16:26:59 -- common/autotest_common.sh@10 -- # set +x 00:09:27.745 16:26:59 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:27.745 16:26:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:27.745 16:26:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.745 16:26:59 -- common/autotest_common.sh@10 -- # set +x 00:09:27.745 ************************************ 00:09:27.745 START TEST json_config 00:09:27.745 ************************************ 00:09:27.745 16:26:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:28.004 16:26:59 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.004 16:26:59 -- nvmf/common.sh@7 -- # uname -s 00:09:28.004 16:26:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.004 16:26:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.004 16:26:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.004 16:26:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.004 16:26:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.004 16:26:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.004 16:26:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.004 16:26:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.004 16:26:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.004 16:26:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.004 16:26:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c946069-964b-446b-9f83-a8479f85bc60 00:09:28.004 16:26:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5c946069-964b-446b-9f83-a8479f85bc60 00:09:28.004 16:26:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.004 16:26:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.004 16:26:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:28.004 16:26:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.004 16:26:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.004 16:26:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.004 16:26:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.004 16:26:59 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:28.004 16:26:59 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:28.004 16:26:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:28.004 16:26:59 -- paths/export.sh@5 -- # export PATH 00:09:28.004 16:26:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:28.004 16:26:59 -- nvmf/common.sh@46 -- # : 0 00:09:28.004 16:26:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:28.004 16:26:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:28.004 16:26:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:28.004 16:26:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.004 16:26:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.004 16:26:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:28.004 16:26:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:28.004 16:26:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:28.004 16:26:59 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:09:28.004 16:26:59 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:09:28.004 16:26:59 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:09:28.004 16:26:59 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:28.004 16:26:59 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:09:28.004 16:26:59 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:09:28.004 16:26:59 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:28.004 16:26:59 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:09:28.004 16:26:59 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:28.004 16:26:59 -- json_config/json_config.sh@32 -- # declare -A app_params 00:09:28.004 16:26:59 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:28.004 16:26:59 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:09:28.004 16:26:59 -- json_config/json_config.sh@43 -- # last_event_id=0 00:09:28.004 16:26:59 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:28.004 INFO: JSON configuration test init 00:09:28.004 16:26:59 -- 
json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:09:28.004 16:26:59 -- json_config/json_config.sh@420 -- # json_config_test_init 00:09:28.004 16:26:59 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:09:28.004 16:26:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:28.004 16:26:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.004 16:26:59 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:09:28.004 16:26:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:28.004 16:26:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.004 16:26:59 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:09:28.004 16:26:59 -- json_config/json_config.sh@98 -- # local app=target 00:09:28.004 16:26:59 -- json_config/json_config.sh@99 -- # shift 00:09:28.004 16:26:59 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:28.004 16:26:59 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:28.004 16:26:59 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:28.004 16:26:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:28.004 16:26:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:28.004 16:26:59 -- json_config/json_config.sh@111 -- # app_pid[$app]=115274 00:09:28.004 16:26:59 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:28.004 Waiting for target to run... 00:09:28.004 16:26:59 -- json_config/json_config.sh@114 -- # waitforlisten 115274 /var/tmp/spdk_tgt.sock 00:09:28.004 16:26:59 -- common/autotest_common.sh@819 -- # '[' -z 115274 ']' 00:09:28.004 16:26:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:28.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:28.004 16:26:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:28.004 16:26:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:28.004 16:26:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:28.004 16:26:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.005 16:26:59 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:28.005 [2024-07-13 16:26:59.366313] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
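json_config starts the target with --wait-for-rpc so a configuration can be loaded before any subsystem initializes. A minimal sketch of that flow under the same flags; framework_start_init and save_config are standard SPDK RPCs, and the test's load_config call seen below is the inverse, feeding a saved file back in:
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init                 # leave the wait-for-rpc state
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json   # snapshot the running config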
00:09:28.005 [2024-07-13 16:26:59.366585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115274 ] 00:09:28.572 [2024-07-13 16:26:59.961204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.572 [2024-07-13 16:27:00.005177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:28.572 [2024-07-13 16:27:00.005520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.831 16:27:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:28.831 16:27:00 -- common/autotest_common.sh@852 -- # return 0 00:09:28.831 16:27:00 -- json_config/json_config.sh@115 -- # echo '' 00:09:28.831 00:09:28.831 16:27:00 -- json_config/json_config.sh@322 -- # create_accel_config 00:09:28.831 16:27:00 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:09:28.831 16:27:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:28.831 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:09:28.831 16:27:00 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:09:28.831 16:27:00 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:09:28.831 16:27:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:28.831 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.090 16:27:00 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:29.090 16:27:00 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:09:29.090 16:27:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:29.349 16:27:00 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:09:29.349 16:27:00 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:09:29.349 16:27:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:29.349 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.349 16:27:00 -- json_config/json_config.sh@48 -- # local ret=0 00:09:29.349 16:27:00 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:29.349 16:27:00 -- json_config/json_config.sh@49 -- # local enabled_types 00:09:29.349 16:27:00 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:29.349 16:27:00 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:29.349 16:27:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:29.608 16:27:00 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:29.608 16:27:00 -- json_config/json_config.sh@51 -- # local get_types 00:09:29.608 16:27:00 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:29.608 16:27:00 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:09:29.608 16:27:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:29.608 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.608 16:27:00 -- json_config/json_config.sh@58 -- # return 0 00:09:29.608 16:27:00 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:09:29.608 16:27:00 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:09:29.608 16:27:00 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:09:29.608 16:27:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:29.608 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.608 16:27:00 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:09:29.608 16:27:00 -- json_config/json_config.sh@160 -- # local expected_notifications 00:09:29.608 16:27:00 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:09:29.608 16:27:00 -- json_config/json_config.sh@164 -- # get_notifications 00:09:29.608 16:27:00 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:09:29.608 16:27:00 -- json_config/json_config.sh@64 -- # IFS=: 00:09:29.608 16:27:00 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:29.608 16:27:00 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:09:29.608 16:27:00 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:29.608 16:27:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:29.867 16:27:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:09:29.867 16:27:01 -- json_config/json_config.sh@64 -- # IFS=: 00:09:29.867 16:27:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:29.867 16:27:01 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:09:29.867 16:27:01 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:09:29.867 16:27:01 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:29.867 16:27:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:30.125 Nvme0n1p0 Nvme0n1p1 00:09:30.125 16:27:01 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:30.125 16:27:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:30.125 [2024-07-13 16:27:01.568841] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:30.125 [2024-07-13 16:27:01.568956] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:30.125 00:09:30.125 16:27:01 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:30.125 16:27:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:30.383 Malloc3 00:09:30.383 16:27:01 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:30.383 16:27:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:30.642 [2024-07-13 16:27:01.916975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:30.642 [2024-07-13 16:27:01.917109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.642 [2024-07-13 16:27:01.917159] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:09:30.642 [2024-07-13 16:27:01.917189] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:30.642 [2024-07-13 16:27:01.920131] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.642 [2024-07-13 16:27:01.920197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:30.642 PTBdevFromMalloc3 00:09:30.642 16:27:01 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:30.642 16:27:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:30.642 Null0 00:09:30.900 16:27:02 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:30.900 16:27:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:30.900 Malloc0 00:09:30.900 16:27:02 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:30.900 16:27:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:31.158 Malloc1 00:09:31.159 16:27:02 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:31.159 16:27:02 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:31.725 102400+0 records in 00:09:31.725 102400+0 records out 00:09:31.725 104857600 bytes (105 MB, 100 MiB) copied, 0.442252 s, 237 MB/s 00:09:31.725 16:27:02 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:31.725 16:27:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:31.725 aio_disk 00:09:31.725 16:27:03 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:31.725 16:27:03 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:31.725 16:27:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:31.983 a9980233-d18e-4a12-9243-39e95f77b1cd 00:09:31.983 16:27:03 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:31.983 16:27:03 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:31.983 16:27:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:32.241 16:27:03 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:32.241 16:27:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:32.499 16:27:03 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:32.499 16:27:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:32.499 16:27:03 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:32.499 16:27:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:32.757 16:27:04 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:09:32.757 16:27:04 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:09:32.757 16:27:04 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a31d440c-7273-4954-95d4-8d3218809dd4 bdev_register:4c194997-d1ff-47ef-91d0-91edca70c157 bdev_register:8258c277-fb9c-4330-9427-883fad09b858 bdev_register:1ecd98e1-fa50-480b-91da-b0291b9642e1 00:09:32.757 16:27:04 -- json_config/json_config.sh@70 -- # local events_to_check 00:09:32.757 16:27:04 -- json_config/json_config.sh@71 -- # local recorded_events 00:09:32.757 16:27:04 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:32.757 16:27:04 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a31d440c-7273-4954-95d4-8d3218809dd4 bdev_register:4c194997-d1ff-47ef-91d0-91edca70c157 bdev_register:8258c277-fb9c-4330-9427-883fad09b858 bdev_register:1ecd98e1-fa50-480b-91da-b0291b9642e1 00:09:32.757 16:27:04 -- json_config/json_config.sh@74 -- # sort 00:09:32.757 16:27:04 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:09:32.757 16:27:04 -- json_config/json_config.sh@75 -- # get_notifications 00:09:32.757 16:27:04 -- json_config/json_config.sh@75 -- # sort 00:09:32.757 16:27:04 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:09:32.757 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:32.757 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:32.757 16:27:04 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:09:32.757 16:27:04 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:32.758 16:27:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:a31d440c-7273-4954-95d4-8d3218809dd4 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:4c194997-d1ff-47ef-91d0-91edca70c157 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:8258c277-fb9c-4330-9427-883fad09b858 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@65 -- # echo bdev_register:1ecd98e1-fa50-480b-91da-b0291b9642e1 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # IFS=: 00:09:33.016 16:27:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:33.016 16:27:04 -- json_config/json_config.sh@77 
-- # [[ bdev_register:1ecd98e1-fa50-480b-91da-b0291b9642e1 bdev_register:4c194997-d1ff-47ef-91d0-91edca70c157 bdev_register:8258c277-fb9c-4330-9427-883fad09b858 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a31d440c-7273-4954-95d4-8d3218809dd4 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\e\c\d\9\8\e\1\-\f\a\5\0\-\4\8\0\b\-\9\1\d\a\-\b\0\2\9\1\b\9\6\4\2\e\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\c\1\9\4\9\9\7\-\d\1\f\f\-\4\7\e\f\-\9\1\d\0\-\9\1\e\d\c\a\7\0\c\1\5\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\2\5\8\c\2\7\7\-\f\b\9\c\-\4\3\3\0\-\9\4\2\7\-\8\8\3\f\a\d\0\9\b\8\5\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\3\1\d\4\4\0\c\-\7\2\7\3\-\4\9\5\4\-\9\5\d\4\-\8\d\3\2\1\8\8\0\9\d\d\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:09:33.016 16:27:04 -- json_config/json_config.sh@89 -- # cat 00:09:33.016 16:27:04 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:1ecd98e1-fa50-480b-91da-b0291b9642e1 bdev_register:4c194997-d1ff-47ef-91d0-91edca70c157 bdev_register:8258c277-fb9c-4330-9427-883fad09b858 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a31d440c-7273-4954-95d4-8d3218809dd4 bdev_register:aio_disk 00:09:33.016 Expected events matched: 00:09:33.016 bdev_register:1ecd98e1-fa50-480b-91da-b0291b9642e1 00:09:33.016 bdev_register:4c194997-d1ff-47ef-91d0-91edca70c157 00:09:33.016 bdev_register:8258c277-fb9c-4330-9427-883fad09b858 00:09:33.016 bdev_register:Malloc0 00:09:33.016 bdev_register:Malloc0p0 00:09:33.016 bdev_register:Malloc0p1 00:09:33.016 bdev_register:Malloc0p2 00:09:33.016 bdev_register:Malloc1 00:09:33.016 bdev_register:Malloc3 00:09:33.016 bdev_register:Null0 00:09:33.016 bdev_register:Nvme0n1 00:09:33.016 bdev_register:Nvme0n1p0 00:09:33.016 bdev_register:Nvme0n1p1 00:09:33.016 bdev_register:PTBdevFromMalloc3 00:09:33.016 bdev_register:a31d440c-7273-4954-95d4-8d3218809dd4 00:09:33.016 bdev_register:aio_disk 00:09:33.016 16:27:04 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:09:33.016 16:27:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:33.016 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:09:33.275 16:27:04 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:09:33.275 16:27:04 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:09:33.275 16:27:04 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:09:33.275 16:27:04 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:09:33.275 16:27:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:33.275 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:09:33.275 
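The create_bdev_subsystem_config phase traced above builds the whole bdev tree over JSON-RPC and then asserts that every registration was reported through the notify subsystem. A minimal sketch of that pattern, against an spdk_tgt assumed to be already listening on /var/tmp/spdk_tgt.sock (the commands and the jq filter are taken from the trace; the surrounding shell is illustrative):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  # Split an existing NVMe bdev, then add RAM-, null- and file-backed bdevs.
  $rpc bdev_split_create Nvme0n1 2
  $rpc bdev_malloc_create 32 512 --name Malloc0
  $rpc bdev_null_create Null0 32 512
  dd if=/dev/zero of=/sample_aio bs=1024 count=102400
  $rpc bdev_aio_create /sample_aio aio_disk 1024

  # Each registration surfaces as a bdev_register notification; sorting both
  # the expected and recorded lists makes the comparison order-independent.
  $rpc notify_get_notifications -i 0 \
      | jq -r '.[] | "\(.type):\(.ctx):\(.id)"' | sort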
16:27:04 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:09:33.275 16:27:04 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:33.275 16:27:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:33.534 MallocBdevForConfigChangeCheck 00:09:33.534 16:27:04 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:09:33.534 16:27:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:33.534 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:09:33.534 16:27:04 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:09:33.534 16:27:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:33.792 INFO: shutting down applications... 00:09:33.792 16:27:05 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:09:33.792 16:27:05 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:09:33.792 16:27:05 -- json_config/json_config.sh@431 -- # json_config_clear target 00:09:33.792 16:27:05 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:09:33.792 16:27:05 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:34.051 [2024-07-13 16:27:05.427765] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:34.311 Calling clear_vhost_scsi_subsystem 00:09:34.311 Calling clear_iscsi_subsystem 00:09:34.311 Calling clear_vhost_blk_subsystem 00:09:34.311 Calling clear_nbd_subsystem 00:09:34.311 Calling clear_nvmf_subsystem 00:09:34.311 Calling clear_bdev_subsystem 00:09:34.311 Calling clear_accel_subsystem 00:09:34.311 Calling clear_iobuf_subsystem 00:09:34.311 Calling clear_sock_subsystem 00:09:34.311 Calling clear_vmd_subsystem 00:09:34.311 Calling clear_scheduler_subsystem 00:09:34.311 16:27:05 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:34.311 16:27:05 -- json_config/json_config.sh@396 -- # count=100 00:09:34.311 16:27:05 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:09:34.311 16:27:05 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:34.311 16:27:05 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:34.311 16:27:05 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:34.570 16:27:05 -- json_config/json_config.sh@398 -- # break 00:09:34.570 16:27:05 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:09:34.570 16:27:05 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:09:34.570 16:27:05 -- json_config/json_config.sh@120 -- # local app=target 00:09:34.570 16:27:05 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:09:34.570 16:27:05 -- json_config/json_config.sh@124 -- # [[ -n 115274 ]] 00:09:34.570 16:27:05 -- json_config/json_config.sh@127 -- # kill -SIGINT 115274 00:09:34.570 16:27:05 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:09:34.570 16:27:05 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:34.570 16:27:05 -- 
json_config/json_config.sh@130 -- # kill -0 115274 00:09:34.570 16:27:05 -- json_config/json_config.sh@134 -- # sleep 0.5 00:09:35.137 16:27:06 -- json_config/json_config.sh@129 -- # (( i++ )) 00:09:35.137 16:27:06 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:35.137 16:27:06 -- json_config/json_config.sh@130 -- # kill -0 115274 00:09:35.137 16:27:06 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:09:35.137 16:27:06 -- json_config/json_config.sh@132 -- # break 00:09:35.137 16:27:06 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:09:35.137 16:27:06 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:09:35.137 SPDK target shutdown done 00:09:35.137 16:27:06 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:09:35.137 INFO: relaunching applications... 00:09:35.137 16:27:06 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:35.137 16:27:06 -- json_config/json_config.sh@98 -- # local app=target 00:09:35.137 16:27:06 -- json_config/json_config.sh@99 -- # shift 00:09:35.137 16:27:06 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:35.137 16:27:06 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:35.137 16:27:06 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:35.137 16:27:06 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:35.137 16:27:06 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:35.137 16:27:06 -- json_config/json_config.sh@111 -- # app_pid[$app]=115512 00:09:35.137 Waiting for target to run... 00:09:35.137 16:27:06 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:35.137 16:27:06 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:35.137 16:27:06 -- json_config/json_config.sh@114 -- # waitforlisten 115512 /var/tmp/spdk_tgt.sock 00:09:35.137 16:27:06 -- common/autotest_common.sh@819 -- # '[' -z 115512 ']' 00:09:35.137 16:27:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:35.137 16:27:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:35.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:35.137 16:27:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:35.137 16:27:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:35.137 16:27:06 -- common/autotest_common.sh@10 -- # set +x 00:09:35.137 [2024-07-13 16:27:06.552705] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
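The shutdown path above is a plain signal-and-poll loop: SIGINT asks the target to exit, and kill -0 (which delivers no signal) probes whether the pid is still alive, half a second at a time, for up to 30 tries. Reconstructed as a sketch with the pid from this run:

  kill -SIGINT 115274
  for ((i = 0; i < 30; i++)); do
      kill -0 115274 2> /dev/null || break   # kill -0 fails once the pid is gone
      sleep 0.5
  done

Once the loop falls through, the target is relaunched from the previously saved spdk_tgt_config.json, which is what the EAL banner that follows reports.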
00:09:35.137 [2024-07-13 16:27:06.552947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115512 ] 00:09:35.702 [2024-07-13 16:27:07.108086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.702 [2024-07-13 16:27:07.151044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.702 [2024-07-13 16:27:07.151337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.960 [2024-07-13 16:27:07.301744] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:35.960 [2024-07-13 16:27:07.301864] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:35.960 [2024-07-13 16:27:07.309681] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:35.960 [2024-07-13 16:27:07.309743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:35.960 [2024-07-13 16:27:07.317742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:35.960 [2024-07-13 16:27:07.317809] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:35.960 [2024-07-13 16:27:07.317845] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:35.960 [2024-07-13 16:27:07.404497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:35.960 [2024-07-13 16:27:07.404595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.960 [2024-07-13 16:27:07.404625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:35.960 [2024-07-13 16:27:07.404659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.960 [2024-07-13 16:27:07.405218] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.960 [2024-07-13 16:27:07.405276] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:36.218 16:27:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.218 16:27:07 -- common/autotest_common.sh@852 -- # return 0 00:09:36.218 00:09:36.218 INFO: Checking if target configuration is the same... 00:09:36.218 16:27:07 -- json_config/json_config.sh@115 -- # echo '' 00:09:36.218 16:27:07 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:09:36.218 16:27:07 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:36.218 16:27:07 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:36.218 16:27:07 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:09:36.218 16:27:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:36.218 + '[' 2 -ne 2 ']' 00:09:36.218 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:36.218 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:36.218 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:36.218 +++ basename /dev/fd/62 00:09:36.218 ++ mktemp /tmp/62.XXX 00:09:36.218 + tmp_file_1=/tmp/62.a8J 00:09:36.218 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:36.218 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:36.218 + tmp_file_2=/tmp/spdk_tgt_config.json.1mp 00:09:36.218 + ret=0 00:09:36.218 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:36.477 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:36.477 + diff -u /tmp/62.a8J /tmp/spdk_tgt_config.json.1mp 00:09:36.477 INFO: JSON config files are the same 00:09:36.477 + echo 'INFO: JSON config files are the same' 00:09:36.477 + rm /tmp/62.a8J /tmp/spdk_tgt_config.json.1mp 00:09:36.477 + exit 0 00:09:36.477 16:27:07 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:09:36.477 INFO: changing configuration and checking if this can be detected... 00:09:36.477 16:27:07 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:36.477 16:27:07 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:36.477 16:27:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:36.735 16:27:08 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:36.735 16:27:08 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:09:36.735 16:27:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:36.735 + '[' 2 -ne 2 ']' 00:09:36.735 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:36.735 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:36.735 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:36.735 +++ basename /dev/fd/62 00:09:36.735 ++ mktemp /tmp/62.XXX 00:09:36.735 + tmp_file_1=/tmp/62.F4K 00:09:36.735 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:36.735 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:36.735 + tmp_file_2=/tmp/spdk_tgt_config.json.gYx 00:09:36.735 + ret=0 00:09:36.735 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:36.991 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:37.248 + diff -u /tmp/62.F4K /tmp/spdk_tgt_config.json.gYx 00:09:37.248 + ret=1 00:09:37.248 + echo '=== Start of file: /tmp/62.F4K ===' 00:09:37.248 + cat /tmp/62.F4K 00:09:37.248 + echo '=== End of file: /tmp/62.F4K ===' 00:09:37.248 + echo '' 00:09:37.248 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gYx ===' 00:09:37.248 + cat /tmp/spdk_tgt_config.json.gYx 00:09:37.248 + echo '=== End of file: /tmp/spdk_tgt_config.json.gYx ===' 00:09:37.248 + echo '' 00:09:37.248 + rm /tmp/62.F4K /tmp/spdk_tgt_config.json.gYx 00:09:37.248 + exit 1 00:09:37.248 INFO: configuration change detected. 00:09:37.248 16:27:08 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
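json_diff.sh decides equality by normalizing both configurations with config_filter.py -method sort and then leaning on diff's exit status: 0 means the live config matches the file, 1 means a change was detected. A condensed sketch, assuming config_filter.py filters stdin to stdout as the trace suggests (the temp-file names here are illustrative; the real script uses mktemp):

  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

  $rpc save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk.json
  if diff -u /tmp/live.json /tmp/disk.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi

Deleting MallocBdevForConfigChangeCheck between the two comparisons is what flips the result from exit 0 to exit 1 above.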
00:09:37.248 16:27:08 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:09:37.248 16:27:08 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:09:37.248 16:27:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:37.248 16:27:08 -- common/autotest_common.sh@10 -- # set +x 00:09:37.248 16:27:08 -- json_config/json_config.sh@360 -- # local ret=0 00:09:37.248 16:27:08 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:09:37.248 16:27:08 -- json_config/json_config.sh@370 -- # [[ -n 115512 ]] 00:09:37.248 16:27:08 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:09:37.248 16:27:08 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:09:37.248 16:27:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:37.248 16:27:08 -- common/autotest_common.sh@10 -- # set +x 00:09:37.248 16:27:08 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:09:37.248 16:27:08 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:37.248 16:27:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:37.506 16:27:08 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:37.506 16:27:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:37.506 16:27:08 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:37.506 16:27:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:09:37.762 16:27:09 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:37.763 16:27:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:38.021 16:27:09 -- json_config/json_config.sh@246 -- # uname -s 00:09:38.021 16:27:09 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:09:38.021 16:27:09 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:09:38.021 16:27:09 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:09:38.021 16:27:09 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:09:38.021 16:27:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:38.021 16:27:09 -- common/autotest_common.sh@10 -- # set +x 00:09:38.021 16:27:09 -- json_config/json_config.sh@376 -- # killprocess 115512 00:09:38.021 16:27:09 -- common/autotest_common.sh@926 -- # '[' -z 115512 ']' 00:09:38.021 16:27:09 -- common/autotest_common.sh@930 -- # kill -0 115512 00:09:38.021 16:27:09 -- common/autotest_common.sh@931 -- # uname 00:09:38.021 16:27:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:38.021 16:27:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115512 00:09:38.021 16:27:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:38.021 16:27:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:38.021 16:27:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115512' 00:09:38.021 killing process with pid 115512 00:09:38.021 16:27:09 -- common/autotest_common.sh@945 -- # kill 115512 00:09:38.021 16:27:09 -- common/autotest_common.sh@950 -- # wait 115512 00:09:38.587 16:27:09 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:38.587 16:27:09 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:09:38.587 16:27:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:38.587 16:27:09 -- common/autotest_common.sh@10 -- # set +x 00:09:38.587 16:27:09 -- json_config/json_config.sh@381 -- # return 0 00:09:38.587 16:27:09 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:09:38.587 INFO: Success 00:09:38.587 ************************************ 00:09:38.587 END TEST json_config 00:09:38.587 ************************************ 00:09:38.587 00:09:38.587 real 0m10.782s 00:09:38.587 user 0m15.412s 00:09:38.587 sys 0m3.202s 00:09:38.587 16:27:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.587 16:27:09 -- common/autotest_common.sh@10 -- # set +x 00:09:38.587 16:27:10 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:38.587 16:27:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:38.587 16:27:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.587 16:27:10 -- common/autotest_common.sh@10 -- # set +x 00:09:38.587 ************************************ 00:09:38.587 START TEST json_config_extra_key 00:09:38.587 ************************************ 00:09:38.587 16:27:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.847 16:27:10 -- nvmf/common.sh@7 -- # uname -s 00:09:38.847 16:27:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.847 16:27:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.847 16:27:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.847 16:27:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.847 16:27:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.847 16:27:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.847 16:27:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.847 16:27:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.847 16:27:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.847 16:27:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.847 16:27:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9725b0c0-7026-4013-abdb-e384a816b2bc 00:09:38.847 16:27:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=9725b0c0-7026-4013-abdb-e384a816b2bc 00:09:38.847 16:27:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.847 16:27:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.847 16:27:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:38.847 16:27:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.847 16:27:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.847 16:27:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.847 16:27:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.847 16:27:10 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:38.847 16:27:10 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:38.847 16:27:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:38.847 16:27:10 -- paths/export.sh@5 -- # export PATH 00:09:38.847 16:27:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:38.847 16:27:10 -- nvmf/common.sh@46 -- # : 0 00:09:38.847 16:27:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:38.847 16:27:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:38.847 16:27:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:38.847 16:27:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.847 16:27:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.847 16:27:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:38.847 16:27:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:38.847 16:27:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:38.847 INFO: launching applications... 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
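The extra-key test keeps all per-app state in the bash associative arrays declared above, so starting an app is purely parameter-driven. A sketch of how those pieces combine (backgrounding and the $! capture are assumed; the trace only shows the declarations and the final command line):

  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

  app=target
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
      -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
  app_pid[$app]=$!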
00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@25 -- # shift 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=115675 00:09:38.847 Waiting for target to run... 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 115675 /var/tmp/spdk_tgt.sock 00:09:38.847 16:27:10 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:38.847 16:27:10 -- common/autotest_common.sh@819 -- # '[' -z 115675 ']' 00:09:38.847 16:27:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:38.847 16:27:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:38.847 16:27:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:38.847 16:27:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.847 16:27:10 -- common/autotest_common.sh@10 -- # set +x 00:09:38.847 [2024-07-13 16:27:10.201199] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:38.847 [2024-07-13 16:27:10.201445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115675 ] 00:09:39.415 [2024-07-13 16:27:10.775732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.415 [2024-07-13 16:27:10.813344] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.415 [2024-07-13 16:27:10.813565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.673 16:27:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.673 00:09:39.673 INFO: shutting down applications... 00:09:39.673 16:27:10 -- common/autotest_common.sh@852 -- # return 0 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
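waitforlisten blocks until the freshly launched target both stays alive and answers RPC on its UNIX socket; only then does the test proceed. The helper's body is not shown in the trace, so this is a plausible reconstruction using the max_retries=100 visible above:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1   # app died during startup
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
              rpc_get_methods &> /dev/null && return 0
          sleep 0.1
      done
      return 1
  }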
00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 115675 ]] 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 115675 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115675 00:09:39.673 16:27:10 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:40.240 16:27:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:40.240 16:27:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:40.240 16:27:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115675 00:09:40.240 16:27:11 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:40.807 16:27:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:40.807 16:27:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:40.807 16:27:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115675 00:09:40.807 16:27:11 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:09:40.807 16:27:11 -- json_config/json_config_extra_key.sh@52 -- # break 00:09:40.807 SPDK target shutdown done 00:09:40.807 16:27:11 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:09:40.807 16:27:11 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:09:40.807 Success 00:09:40.807 16:27:11 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:09:40.807 ************************************ 00:09:40.807 END TEST json_config_extra_key 00:09:40.807 ************************************ 00:09:40.807 00:09:40.807 real 0m1.977s 00:09:40.807 user 0m1.305s 00:09:40.807 sys 0m0.670s 00:09:40.807 16:27:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.807 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 16:27:12 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:40.807 16:27:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:40.807 16:27:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.807 16:27:12 -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 ************************************ 00:09:40.807 START TEST alias_rpc 00:09:40.807 ************************************ 00:09:40.807 16:27:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:40.807 * Looking for test storage... 
00:09:40.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:40.807 16:27:12 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:40.807 16:27:12 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=115755 00:09:40.807 16:27:12 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 115755 00:09:40.807 16:27:12 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:40.807 16:27:12 -- common/autotest_common.sh@819 -- # '[' -z 115755 ']' 00:09:40.807 16:27:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.807 16:27:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:40.807 16:27:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.807 16:27:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:40.807 16:27:12 -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 [2024-07-13 16:27:12.237224] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:40.807 [2024-07-13 16:27:12.238037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115755 ] 00:09:41.066 [2024-07-13 16:27:12.392296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.066 [2024-07-13 16:27:12.468027] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:41.066 [2024-07-13 16:27:12.468297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.998 16:27:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:41.998 16:27:13 -- common/autotest_common.sh@852 -- # return 0 00:09:41.998 16:27:13 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:41.998 16:27:13 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 115755 00:09:41.998 16:27:13 -- common/autotest_common.sh@926 -- # '[' -z 115755 ']' 00:09:41.998 16:27:13 -- common/autotest_common.sh@930 -- # kill -0 115755 00:09:41.998 16:27:13 -- common/autotest_common.sh@931 -- # uname 00:09:41.998 16:27:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:41.998 16:27:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115755 00:09:41.998 16:27:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:41.998 16:27:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:41.998 killing process with pid 115755 00:09:41.998 16:27:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115755' 00:09:41.998 16:27:13 -- common/autotest_common.sh@945 -- # kill 115755 00:09:41.998 16:27:13 -- common/autotest_common.sh@950 -- # wait 115755 00:09:42.930 ************************************ 00:09:42.930 END TEST alias_rpc 00:09:42.930 ************************************ 00:09:42.930 00:09:42.930 real 0m2.075s 00:09:42.930 user 0m2.060s 00:09:42.930 sys 0m0.670s 00:09:42.930 16:27:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.930 16:27:14 -- common/autotest_common.sh@10 -- # set +x 00:09:42.930 16:27:14 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:09:42.930 16:27:14 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp 
/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:42.930 16:27:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:42.930 16:27:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:42.930 16:27:14 -- common/autotest_common.sh@10 -- # set +x 00:09:42.930 ************************************ 00:09:42.930 START TEST spdkcli_tcp 00:09:42.930 ************************************ 00:09:42.930 16:27:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:42.930 * Looking for test storage... 00:09:42.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:42.930 16:27:14 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:42.930 16:27:14 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:42.930 16:27:14 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:42.930 16:27:14 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:42.930 16:27:14 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:42.930 16:27:14 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:42.930 16:27:14 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:42.930 16:27:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:42.930 16:27:14 -- common/autotest_common.sh@10 -- # set +x 00:09:42.930 16:27:14 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=115840 00:09:42.930 16:27:14 -- spdkcli/tcp.sh@27 -- # waitforlisten 115840 00:09:42.930 16:27:14 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:42.930 16:27:14 -- common/autotest_common.sh@819 -- # '[' -z 115840 ']' 00:09:42.930 16:27:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.930 16:27:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:42.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.930 16:27:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.930 16:27:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:42.930 16:27:14 -- common/autotest_common.sh@10 -- # set +x 00:09:42.930 [2024-07-13 16:27:14.393651] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
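spdkcli_tcp exercises the same RPC surface over TCP instead of the UNIX socket: socat bridges port 9998 to /var/tmp/spdk.sock, and rpc.py is then pointed at the IP/port pair, as the trace that follows shows. The essential lines, with the addresses configured above:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
      -s 127.0.0.1 -p 9998 rpc_get_methods

The long method list printed next is simply that call's JSON reply.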
00:09:42.930 [2024-07-13 16:27:14.393928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115840 ] 00:09:43.188 [2024-07-13 16:27:14.557980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.188 [2024-07-13 16:27:14.645088] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:43.188 [2024-07-13 16:27:14.645575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.188 [2024-07-13 16:27:14.645582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.125 16:27:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:44.125 16:27:15 -- common/autotest_common.sh@852 -- # return 0 00:09:44.125 16:27:15 -- spdkcli/tcp.sh@31 -- # socat_pid=115862 00:09:44.125 16:27:15 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:44.125 16:27:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:44.125 [ 00:09:44.125 "spdk_get_version", 00:09:44.125 "rpc_get_methods", 00:09:44.125 "trace_get_info", 00:09:44.125 "trace_get_tpoint_group_mask", 00:09:44.125 "trace_disable_tpoint_group", 00:09:44.125 "trace_enable_tpoint_group", 00:09:44.125 "trace_clear_tpoint_mask", 00:09:44.125 "trace_set_tpoint_mask", 00:09:44.125 "framework_get_pci_devices", 00:09:44.125 "framework_get_config", 00:09:44.125 "framework_get_subsystems", 00:09:44.125 "iobuf_get_stats", 00:09:44.125 "iobuf_set_options", 00:09:44.125 "sock_set_default_impl", 00:09:44.125 "sock_impl_set_options", 00:09:44.125 "sock_impl_get_options", 00:09:44.125 "vmd_rescan", 00:09:44.125 "vmd_remove_device", 00:09:44.125 "vmd_enable", 00:09:44.125 "accel_get_stats", 00:09:44.125 "accel_set_options", 00:09:44.125 "accel_set_driver", 00:09:44.125 "accel_crypto_key_destroy", 00:09:44.125 "accel_crypto_keys_get", 00:09:44.125 "accel_crypto_key_create", 00:09:44.125 "accel_assign_opc", 00:09:44.125 "accel_get_module_info", 00:09:44.125 "accel_get_opc_assignments", 00:09:44.125 "notify_get_notifications", 00:09:44.125 "notify_get_types", 00:09:44.125 "bdev_get_histogram", 00:09:44.125 "bdev_enable_histogram", 00:09:44.125 "bdev_set_qos_limit", 00:09:44.125 "bdev_set_qd_sampling_period", 00:09:44.125 "bdev_get_bdevs", 00:09:44.125 "bdev_reset_iostat", 00:09:44.125 "bdev_get_iostat", 00:09:44.125 "bdev_examine", 00:09:44.125 "bdev_wait_for_examine", 00:09:44.125 "bdev_set_options", 00:09:44.125 "scsi_get_devices", 00:09:44.125 "thread_set_cpumask", 00:09:44.125 "framework_get_scheduler", 00:09:44.125 "framework_set_scheduler", 00:09:44.125 "framework_get_reactors", 00:09:44.125 "thread_get_io_channels", 00:09:44.125 "thread_get_pollers", 00:09:44.125 "thread_get_stats", 00:09:44.125 "framework_monitor_context_switch", 00:09:44.125 "spdk_kill_instance", 00:09:44.125 "log_enable_timestamps", 00:09:44.125 "log_get_flags", 00:09:44.125 "log_clear_flag", 00:09:44.125 "log_set_flag", 00:09:44.125 "log_get_level", 00:09:44.125 "log_set_level", 00:09:44.125 "log_get_print_level", 00:09:44.125 "log_set_print_level", 00:09:44.125 "framework_enable_cpumask_locks", 00:09:44.125 "framework_disable_cpumask_locks", 00:09:44.125 "framework_wait_init", 00:09:44.125 "framework_start_init", 00:09:44.125 "virtio_blk_create_transport", 00:09:44.125 "virtio_blk_get_transports", 
00:09:44.125 "vhost_controller_set_coalescing", 00:09:44.125 "vhost_get_controllers", 00:09:44.125 "vhost_delete_controller", 00:09:44.125 "vhost_create_blk_controller", 00:09:44.125 "vhost_scsi_controller_remove_target", 00:09:44.125 "vhost_scsi_controller_add_target", 00:09:44.125 "vhost_start_scsi_controller", 00:09:44.125 "vhost_create_scsi_controller", 00:09:44.125 "nbd_get_disks", 00:09:44.126 "nbd_stop_disk", 00:09:44.126 "nbd_start_disk", 00:09:44.126 "env_dpdk_get_mem_stats", 00:09:44.126 "nvmf_subsystem_get_listeners", 00:09:44.126 "nvmf_subsystem_get_qpairs", 00:09:44.126 "nvmf_subsystem_get_controllers", 00:09:44.126 "nvmf_get_stats", 00:09:44.126 "nvmf_get_transports", 00:09:44.126 "nvmf_create_transport", 00:09:44.126 "nvmf_get_targets", 00:09:44.126 "nvmf_delete_target", 00:09:44.126 "nvmf_create_target", 00:09:44.126 "nvmf_subsystem_allow_any_host", 00:09:44.126 "nvmf_subsystem_remove_host", 00:09:44.126 "nvmf_subsystem_add_host", 00:09:44.126 "nvmf_subsystem_remove_ns", 00:09:44.126 "nvmf_subsystem_add_ns", 00:09:44.126 "nvmf_subsystem_listener_set_ana_state", 00:09:44.126 "nvmf_discovery_get_referrals", 00:09:44.126 "nvmf_discovery_remove_referral", 00:09:44.126 "nvmf_discovery_add_referral", 00:09:44.126 "nvmf_subsystem_remove_listener", 00:09:44.126 "nvmf_subsystem_add_listener", 00:09:44.126 "nvmf_delete_subsystem", 00:09:44.126 "nvmf_create_subsystem", 00:09:44.126 "nvmf_get_subsystems", 00:09:44.126 "nvmf_set_crdt", 00:09:44.126 "nvmf_set_config", 00:09:44.126 "nvmf_set_max_subsystems", 00:09:44.126 "iscsi_set_options", 00:09:44.126 "iscsi_get_auth_groups", 00:09:44.126 "iscsi_auth_group_remove_secret", 00:09:44.126 "iscsi_auth_group_add_secret", 00:09:44.126 "iscsi_delete_auth_group", 00:09:44.126 "iscsi_create_auth_group", 00:09:44.126 "iscsi_set_discovery_auth", 00:09:44.126 "iscsi_get_options", 00:09:44.126 "iscsi_target_node_request_logout", 00:09:44.126 "iscsi_target_node_set_redirect", 00:09:44.126 "iscsi_target_node_set_auth", 00:09:44.126 "iscsi_target_node_add_lun", 00:09:44.126 "iscsi_get_connections", 00:09:44.126 "iscsi_portal_group_set_auth", 00:09:44.126 "iscsi_start_portal_group", 00:09:44.126 "iscsi_delete_portal_group", 00:09:44.126 "iscsi_create_portal_group", 00:09:44.126 "iscsi_get_portal_groups", 00:09:44.126 "iscsi_delete_target_node", 00:09:44.126 "iscsi_target_node_remove_pg_ig_maps", 00:09:44.126 "iscsi_target_node_add_pg_ig_maps", 00:09:44.126 "iscsi_create_target_node", 00:09:44.126 "iscsi_get_target_nodes", 00:09:44.126 "iscsi_delete_initiator_group", 00:09:44.126 "iscsi_initiator_group_remove_initiators", 00:09:44.126 "iscsi_initiator_group_add_initiators", 00:09:44.126 "iscsi_create_initiator_group", 00:09:44.126 "iscsi_get_initiator_groups", 00:09:44.126 "iaa_scan_accel_module", 00:09:44.126 "dsa_scan_accel_module", 00:09:44.126 "ioat_scan_accel_module", 00:09:44.126 "accel_error_inject_error", 00:09:44.126 "bdev_iscsi_delete", 00:09:44.126 "bdev_iscsi_create", 00:09:44.126 "bdev_iscsi_set_options", 00:09:44.126 "bdev_virtio_attach_controller", 00:09:44.126 "bdev_virtio_scsi_get_devices", 00:09:44.126 "bdev_virtio_detach_controller", 00:09:44.126 "bdev_virtio_blk_set_hotplug", 00:09:44.126 "bdev_ftl_set_property", 00:09:44.126 "bdev_ftl_get_properties", 00:09:44.126 "bdev_ftl_get_stats", 00:09:44.126 "bdev_ftl_unmap", 00:09:44.126 "bdev_ftl_unload", 00:09:44.126 "bdev_ftl_delete", 00:09:44.126 "bdev_ftl_load", 00:09:44.126 "bdev_ftl_create", 00:09:44.126 "bdev_aio_delete", 00:09:44.126 "bdev_aio_rescan", 00:09:44.126 "bdev_aio_create", 
00:09:44.126 "blobfs_create", 00:09:44.126 "blobfs_detect", 00:09:44.126 "blobfs_set_cache_size", 00:09:44.126 "bdev_zone_block_delete", 00:09:44.126 "bdev_zone_block_create", 00:09:44.126 "bdev_delay_delete", 00:09:44.126 "bdev_delay_create", 00:09:44.126 "bdev_delay_update_latency", 00:09:44.126 "bdev_split_delete", 00:09:44.126 "bdev_split_create", 00:09:44.126 "bdev_error_inject_error", 00:09:44.126 "bdev_error_delete", 00:09:44.126 "bdev_error_create", 00:09:44.126 "bdev_raid_set_options", 00:09:44.126 "bdev_raid_remove_base_bdev", 00:09:44.126 "bdev_raid_add_base_bdev", 00:09:44.126 "bdev_raid_delete", 00:09:44.126 "bdev_raid_create", 00:09:44.126 "bdev_raid_get_bdevs", 00:09:44.126 "bdev_lvol_grow_lvstore", 00:09:44.126 "bdev_lvol_get_lvols", 00:09:44.126 "bdev_lvol_get_lvstores", 00:09:44.126 "bdev_lvol_delete", 00:09:44.126 "bdev_lvol_set_read_only", 00:09:44.126 "bdev_lvol_resize", 00:09:44.126 "bdev_lvol_decouple_parent", 00:09:44.126 "bdev_lvol_inflate", 00:09:44.126 "bdev_lvol_rename", 00:09:44.126 "bdev_lvol_clone_bdev", 00:09:44.126 "bdev_lvol_clone", 00:09:44.126 "bdev_lvol_snapshot", 00:09:44.126 "bdev_lvol_create", 00:09:44.126 "bdev_lvol_delete_lvstore", 00:09:44.126 "bdev_lvol_rename_lvstore", 00:09:44.126 "bdev_lvol_create_lvstore", 00:09:44.126 "bdev_passthru_delete", 00:09:44.126 "bdev_passthru_create", 00:09:44.126 "bdev_nvme_cuse_unregister", 00:09:44.126 "bdev_nvme_cuse_register", 00:09:44.126 "bdev_opal_new_user", 00:09:44.126 "bdev_opal_set_lock_state", 00:09:44.126 "bdev_opal_delete", 00:09:44.126 "bdev_opal_get_info", 00:09:44.126 "bdev_opal_create", 00:09:44.126 "bdev_nvme_opal_revert", 00:09:44.126 "bdev_nvme_opal_init", 00:09:44.126 "bdev_nvme_send_cmd", 00:09:44.126 "bdev_nvme_get_path_iostat", 00:09:44.126 "bdev_nvme_get_mdns_discovery_info", 00:09:44.126 "bdev_nvme_stop_mdns_discovery", 00:09:44.126 "bdev_nvme_start_mdns_discovery", 00:09:44.126 "bdev_nvme_set_multipath_policy", 00:09:44.126 "bdev_nvme_set_preferred_path", 00:09:44.126 "bdev_nvme_get_io_paths", 00:09:44.126 "bdev_nvme_remove_error_injection", 00:09:44.126 "bdev_nvme_add_error_injection", 00:09:44.126 "bdev_nvme_get_discovery_info", 00:09:44.126 "bdev_nvme_stop_discovery", 00:09:44.126 "bdev_nvme_start_discovery", 00:09:44.126 "bdev_nvme_get_controller_health_info", 00:09:44.126 "bdev_nvme_disable_controller", 00:09:44.126 "bdev_nvme_enable_controller", 00:09:44.126 "bdev_nvme_reset_controller", 00:09:44.126 "bdev_nvme_get_transport_statistics", 00:09:44.126 "bdev_nvme_apply_firmware", 00:09:44.126 "bdev_nvme_detach_controller", 00:09:44.126 "bdev_nvme_get_controllers", 00:09:44.126 "bdev_nvme_attach_controller", 00:09:44.126 "bdev_nvme_set_hotplug", 00:09:44.126 "bdev_nvme_set_options", 00:09:44.126 "bdev_null_resize", 00:09:44.126 "bdev_null_delete", 00:09:44.126 "bdev_null_create", 00:09:44.126 "bdev_malloc_delete", 00:09:44.126 "bdev_malloc_create" 00:09:44.126 ] 00:09:44.126 16:27:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:44.126 16:27:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:44.126 16:27:15 -- common/autotest_common.sh@10 -- # set +x 00:09:44.385 16:27:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:44.386 16:27:15 -- spdkcli/tcp.sh@38 -- # killprocess 115840 00:09:44.386 16:27:15 -- common/autotest_common.sh@926 -- # '[' -z 115840 ']' 00:09:44.386 16:27:15 -- common/autotest_common.sh@930 -- # kill -0 115840 00:09:44.386 16:27:15 -- common/autotest_common.sh@931 -- # uname 00:09:44.386 16:27:15 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:09:44.386 16:27:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115840 00:09:44.386 16:27:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:44.386 16:27:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:44.386 16:27:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115840' 00:09:44.386 killing process with pid 115840 00:09:44.386 16:27:15 -- common/autotest_common.sh@945 -- # kill 115840 00:09:44.386 16:27:15 -- common/autotest_common.sh@950 -- # wait 115840 00:09:44.952 00:09:44.952 real 0m2.127s 00:09:44.952 user 0m3.599s 00:09:44.952 sys 0m0.706s 00:09:44.952 16:27:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.952 ************************************ 00:09:44.952 END TEST spdkcli_tcp 00:09:44.952 ************************************ 00:09:44.952 16:27:16 -- common/autotest_common.sh@10 -- # set +x 00:09:44.952 16:27:16 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:44.952 16:27:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:44.952 16:27:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.952 16:27:16 -- common/autotest_common.sh@10 -- # set +x 00:09:44.952 ************************************ 00:09:44.952 START TEST dpdk_mem_utility 00:09:44.952 ************************************ 00:09:44.952 16:27:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:45.210 * Looking for test storage... 00:09:45.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:45.210 16:27:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:45.210 16:27:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=115940 00:09:45.210 16:27:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 115940 00:09:45.210 16:27:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:45.210 16:27:16 -- common/autotest_common.sh@819 -- # '[' -z 115940 ']' 00:09:45.211 16:27:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.211 16:27:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:45.211 16:27:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.211 16:27:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:45.211 16:27:16 -- common/autotest_common.sh@10 -- # set +x 00:09:45.211 [2024-07-13 16:27:16.576606] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
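The utility under test has two halves: an RPC that makes the target write its allocator state to a dump file, and a parser script that summarizes it. A sketch of the flow (the dump path comes from the RPC's JSON reply, as seen below; rpc.py falls back to its default /var/tmp/spdk.sock here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc env_dpdk_get_mem_stats                                  # -> /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap/mempool/memzone summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # detailed map of heap 0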
00:09:45.211 [2024-07-13 16:27:16.576865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115940 ] 00:09:45.469 [2024-07-13 16:27:16.730412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.469 [2024-07-13 16:27:16.815605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:45.469 [2024-07-13 16:27:16.815930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.406 16:27:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:46.406 16:27:17 -- common/autotest_common.sh@852 -- # return 0 00:09:46.406 16:27:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:46.406 16:27:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:46.406 16:27:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:46.406 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:09:46.406 { 00:09:46.406 "filename": "/tmp/spdk_mem_dump.txt" 00:09:46.406 } 00:09:46.406 16:27:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:46.406 16:27:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:46.406 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:46.406 1 heaps totaling size 814.000000 MiB 00:09:46.406 size: 814.000000 MiB heap id: 0 00:09:46.406 end heaps---------- 00:09:46.406 8 mempools totaling size 598.116089 MiB 00:09:46.406 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:46.406 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:46.406 size: 84.521057 MiB name: bdev_io_115940 00:09:46.406 size: 51.011292 MiB name: evtpool_115940 00:09:46.406 size: 50.003479 MiB name: msgpool_115940 00:09:46.406 size: 21.763794 MiB name: PDU_Pool 00:09:46.406 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:46.406 size: 0.026123 MiB name: Session_Pool 00:09:46.406 end mempools------- 00:09:46.406 6 memzones totaling size 4.142822 MiB 00:09:46.406 size: 1.000366 MiB name: RG_ring_0_115940 00:09:46.406 size: 1.000366 MiB name: RG_ring_1_115940 00:09:46.406 size: 1.000366 MiB name: RG_ring_4_115940 00:09:46.406 size: 1.000366 MiB name: RG_ring_5_115940 00:09:46.407 size: 0.125366 MiB name: RG_ring_2_115940 00:09:46.407 size: 0.015991 MiB name: RG_ring_3_115940 00:09:46.407 end memzones------- 00:09:46.407 16:27:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:46.407 heap id: 0 total size: 814.000000 MiB number of busy elements: 222 number of free elements: 15 00:09:46.407 list of free elements. 
size: 12.486206 MiB 00:09:46.407 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:46.407 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:46.407 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:46.407 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:46.407 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:46.407 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:46.407 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:46.407 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:46.407 element at address: 0x200000200000 with size: 0.837219 MiB 00:09:46.407 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:09:46.407 element at address: 0x20000b200000 with size: 0.489624 MiB 00:09:46.407 element at address: 0x200000800000 with size: 0.486511 MiB 00:09:46.407 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:46.407 element at address: 0x200027e00000 with size: 0.402527 MiB 00:09:46.407 element at address: 0x200003a00000 with size: 0.351501 MiB 00:09:46.407 list of standard malloc elements. size: 199.251221 MiB 00:09:46.407 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:46.407 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:46.407 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:46.407 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:46.407 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:46.407 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:46.407 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:46.407 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:46.407 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:46.407 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:09:46.407 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000087c980 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:09:46.407 element at 
address: 0x20000b27d700 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93340 
with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:46.407 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:46.408 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e670c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e67180 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6dd80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e040 with size: 0.000183 MiB 
00:09:46.408 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:46.408 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:46.408 list of memzone associated elements. 
size: 602.262573 MiB 00:09:46.408 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:46.408 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:46.408 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:46.408 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:46.408 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:46.408 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_115940_0 00:09:46.408 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:46.408 associated memzone info: size: 48.002930 MiB name: MP_evtpool_115940_0 00:09:46.408 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:46.408 associated memzone info: size: 48.002930 MiB name: MP_msgpool_115940_0 00:09:46.408 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:46.408 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:46.408 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:46.408 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:46.408 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:46.408 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_115940 00:09:46.408 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:46.408 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_115940 00:09:46.408 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:46.408 associated memzone info: size: 1.007996 MiB name: MP_evtpool_115940 00:09:46.408 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:46.408 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:46.408 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:46.408 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:46.408 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:46.408 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:46.408 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:46.408 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:46.408 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:46.408 associated memzone info: size: 1.000366 MiB name: RG_ring_0_115940 00:09:46.408 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:46.408 associated memzone info: size: 1.000366 MiB name: RG_ring_1_115940 00:09:46.408 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:46.408 associated memzone info: size: 1.000366 MiB name: RG_ring_4_115940 00:09:46.408 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:46.408 associated memzone info: size: 1.000366 MiB name: RG_ring_5_115940 00:09:46.408 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:46.408 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_115940 00:09:46.408 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:46.408 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:46.408 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:46.408 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:46.408 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:46.408 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:46.408 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:46.408 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_115940 00:09:46.408 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:46.408 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:46.408 element at address: 0x200027e67240 with size: 0.023743 MiB 00:09:46.408 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:46.408 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:46.408 associated memzone info: size: 0.015991 MiB name: RG_ring_3_115940 00:09:46.408 element at address: 0x200027e6d380 with size: 0.002441 MiB 00:09:46.408 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:46.408 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:09:46.408 associated memzone info: size: 0.000183 MiB name: MP_msgpool_115940 00:09:46.408 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:46.408 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_115940 00:09:46.408 element at address: 0x200027e6de40 with size: 0.000305 MiB 00:09:46.408 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:46.408 16:27:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:46.408 16:27:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 115940 00:09:46.408 16:27:17 -- common/autotest_common.sh@926 -- # '[' -z 115940 ']' 00:09:46.408 16:27:17 -- common/autotest_common.sh@930 -- # kill -0 115940 00:09:46.408 16:27:17 -- common/autotest_common.sh@931 -- # uname 00:09:46.408 16:27:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:46.408 16:27:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115940 00:09:46.408 16:27:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:46.408 killing process with pid 115940 00:09:46.408 16:27:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:46.408 16:27:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115940' 00:09:46.408 16:27:17 -- common/autotest_common.sh@945 -- # kill 115940 00:09:46.408 16:27:17 -- common/autotest_common.sh@950 -- # wait 115940 00:09:46.975 00:09:46.975 real 0m1.973s 00:09:46.975 user 0m1.897s 00:09:46.975 sys 0m0.644s 00:09:46.975 16:27:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.975 ************************************ 00:09:46.975 END TEST dpdk_mem_utility 00:09:46.975 ************************************ 00:09:46.975 16:27:18 -- common/autotest_common.sh@10 -- # set +x 00:09:46.975 16:27:18 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:46.975 16:27:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:46.975 16:27:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:46.975 16:27:18 -- common/autotest_common.sh@10 -- # set +x 00:09:46.975 ************************************ 00:09:46.975 START TEST event 00:09:46.975 ************************************ 00:09:46.975 16:27:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:47.235 * Looking for test storage... 
00:09:47.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:47.235 16:27:18 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:47.235 16:27:18 -- bdev/nbd_common.sh@6 -- # set -e 00:09:47.235 16:27:18 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:47.235 16:27:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:47.235 16:27:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:47.235 16:27:18 -- common/autotest_common.sh@10 -- # set +x 00:09:47.235 ************************************ 00:09:47.235 START TEST event_perf 00:09:47.235 ************************************ 00:09:47.235 16:27:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:47.235 Running I/O for 1 seconds...[2024-07-13 16:27:18.572772] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:47.235 [2024-07-13 16:27:18.573036] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116030 ] 00:09:47.492 [2024-07-13 16:27:18.746297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.492 [2024-07-13 16:27:18.823248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.492 [2024-07-13 16:27:18.823384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.492 [2024-07-13 16:27:18.824569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.492 [2024-07-13 16:27:18.824572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.863 Running I/O for 1 seconds... 00:09:48.863 lcore 0: 189948 00:09:48.863 lcore 1: 189949 00:09:48.863 lcore 2: 189948 00:09:48.863 lcore 3: 189948 00:09:48.863 done. 00:09:48.863 00:09:48.863 real 0m1.479s 00:09:48.863 user 0m4.243s 00:09:48.863 sys 0m0.137s 00:09:48.863 16:27:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.863 16:27:20 -- common/autotest_common.sh@10 -- # set +x 00:09:48.863 ************************************ 00:09:48.863 END TEST event_perf 00:09:48.863 ************************************ 00:09:48.863 16:27:20 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:48.863 16:27:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:48.863 16:27:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:48.863 16:27:20 -- common/autotest_common.sh@10 -- # set +x 00:09:48.863 ************************************ 00:09:48.863 START TEST event_reactor 00:09:48.863 ************************************ 00:09:48.863 16:27:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:48.863 [2024-07-13 16:27:20.103223] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
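For context, the event_perf result above (four lcores, roughly 190k events each in one second) comes from the invocation visible in its banner; -m is the reactor core mask and -t the run time in seconds:

# one-second event round-trip benchmark across four reactors (cores 0-3)
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1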
00:09:48.863 [2024-07-13 16:27:20.103505] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116078 ] 00:09:48.863 [2024-07-13 16:27:20.264666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.121 [2024-07-13 16:27:20.354597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.104 test_start 00:09:50.104 oneshot 00:09:50.104 tick 100 00:09:50.104 tick 100 00:09:50.104 tick 250 00:09:50.104 tick 100 00:09:50.104 tick 100 00:09:50.104 tick 100 00:09:50.104 tick 250 00:09:50.104 tick 500 00:09:50.104 tick 100 00:09:50.104 tick 100 00:09:50.104 tick 250 00:09:50.104 tick 100 00:09:50.104 tick 100 00:09:50.104 test_end 00:09:50.104 00:09:50.104 real 0m1.478s 00:09:50.104 user 0m1.254s 00:09:50.104 sys 0m0.124s 00:09:50.104 16:27:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.104 16:27:21 -- common/autotest_common.sh@10 -- # set +x 00:09:50.104 ************************************ 00:09:50.104 END TEST event_reactor 00:09:50.104 ************************************ 00:09:50.362 16:27:21 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:50.362 16:27:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:50.362 16:27:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:50.362 16:27:21 -- common/autotest_common.sh@10 -- # set +x 00:09:50.362 ************************************ 00:09:50.362 START TEST event_reactor_perf 00:09:50.362 ************************************ 00:09:50.362 16:27:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:50.363 [2024-07-13 16:27:21.648564] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
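The test_start/tick/test_end trace above is the reactor smoke test on a single core; the tick 100/250/500 lines appear to be periodic pollers firing at their configured periods, alongside one oneshot event. It can be re-run standalone:

# single-core reactor and poller smoke test, one second
/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1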
00:09:50.363 [2024-07-13 16:27:21.648856] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116118 ] 00:09:50.363 [2024-07-13 16:27:21.804574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.621 [2024-07-13 16:27:21.892925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.995 test_start 00:09:51.995 test_end 00:09:51.995 Performance: 359030 events per second 00:09:51.995 00:09:51.995 real 0m1.479s 00:09:51.995 user 0m1.234s 00:09:51.995 sys 0m0.144s 00:09:51.995 16:27:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.995 ************************************ 00:09:51.995 END TEST event_reactor_perf 00:09:51.995 ************************************ 00:09:51.995 16:27:23 -- common/autotest_common.sh@10 -- # set +x 00:09:51.995 16:27:23 -- event/event.sh@49 -- # uname -s 00:09:51.995 16:27:23 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:51.995 16:27:23 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:51.995 16:27:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:51.995 16:27:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:51.995 16:27:23 -- common/autotest_common.sh@10 -- # set +x 00:09:51.995 ************************************ 00:09:51.995 START TEST event_scheduler 00:09:51.995 ************************************ 00:09:51.995 16:27:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:51.995 * Looking for test storage... 00:09:51.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:51.995 16:27:23 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:51.995 16:27:23 -- scheduler/scheduler.sh@35 -- # scheduler_pid=116194 00:09:51.995 16:27:23 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:51.995 16:27:23 -- scheduler/scheduler.sh@37 -- # waitforlisten 116194 00:09:51.995 16:27:23 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:51.995 16:27:23 -- common/autotest_common.sh@819 -- # '[' -z 116194 ']' 00:09:51.995 16:27:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.995 16:27:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:51.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.995 16:27:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.995 16:27:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:51.995 16:27:23 -- common/autotest_common.sh@10 -- # set +x 00:09:51.995 [2024-07-13 16:27:23.358426] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
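reactor_perf, by contrast, pushes back-to-back events through a single reactor and reports raw throughput, 359,030 events per second in this run:

# event throughput on one reactor, one second
/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1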
00:09:51.995 [2024-07-13 16:27:23.358720] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116194 ] 00:09:52.253 [2024-07-13 16:27:23.547578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.253 [2024-07-13 16:27:23.645472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.253 [2024-07-13 16:27:23.645595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.253 [2024-07-13 16:27:23.646738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.253 [2024-07-13 16:27:23.646755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.188 16:27:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:53.188 16:27:24 -- common/autotest_common.sh@852 -- # return 0 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:53.188 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.188 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.188 POWER: Env isn't set yet! 00:09:53.188 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:53.188 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:53.188 POWER: Cannot set governor of lcore 0 to userspace 00:09:53.188 POWER: Attempting to initialise PSTAT power management... 00:09:53.188 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:53.188 POWER: Cannot set governor of lcore 0 to performance 00:09:53.188 POWER: Attempting to initialise CPPC power management... 00:09:53.188 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:53.188 POWER: Cannot set governor of lcore 0 to userspace 00:09:53.188 POWER: Attempting to initialise VM power management... 00:09:53.188 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:53.188 POWER: Unable to set Power Management Environment for lcore 0 00:09:53.188 [2024-07-13 16:27:24.342430] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:53.188 [2024-07-13 16:27:24.342508] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:53.188 [2024-07-13 16:27:24.342561] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:53.188 [2024-07-13 16:27:24.342622] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:53.188 [2024-07-13 16:27:24.342678] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:53.188 [2024-07-13 16:27:24.342710] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:53.188 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:53.188 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.188 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.188 [2024-07-13 16:27:24.473249] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
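The POWER errors above are expected on hosts without writable cpufreq sysfs entries (typical for VMs): the dynamic scheduler cannot initialize the dpdk governor and proceeds without frequency scaling, which is why the run continues with only the load/core/busy limit notices. Because the scheduler app was launched with --wait-for-rpc, the scheduler is selected over RPC before subsystem init; the same two calls should work against any SPDK app started that way:

# choose the dynamic scheduler, then let initialization proceed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init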
00:09:53.188 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:53.188 16:27:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:53.188 16:27:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:53.188 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.188 ************************************ 00:09:53.188 START TEST scheduler_create_thread 00:09:53.188 ************************************ 00:09:53.188 16:27:24 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:53.188 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.188 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.188 2 00:09:53.188 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:53.188 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.188 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.188 3 00:09:53.188 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:53.188 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.188 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.188 4 00:09:53.188 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:53.188 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.188 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.188 5 00:09:53.188 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:53.188 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.188 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.188 6 00:09:53.188 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:53.188 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.188 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.188 7 00:09:53.188 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.188 16:27:24 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:53.188 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.189 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.189 8 00:09:53.189 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.189 16:27:24 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:53.189 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.189 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.189 9 00:09:53.189 
16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.189 16:27:24 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:53.189 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.189 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.189 10 00:09:53.189 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.189 16:27:24 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:53.189 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.189 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.189 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.189 16:27:24 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:53.189 16:27:24 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:53.189 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.189 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:53.189 16:27:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:53.189 16:27:24 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:53.189 16:27:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:53.189 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:09:54.179 16:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:54.179 16:27:25 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:54.179 16:27:25 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:54.179 16:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:54.179 16:27:25 -- common/autotest_common.sh@10 -- # set +x 00:09:55.554 16:27:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:55.554 00:09:55.554 real 0m2.141s 00:09:55.554 user 0m0.007s 00:09:55.554 sys 0m0.012s 00:09:55.554 ************************************ 00:09:55.554 END TEST scheduler_create_thread 00:09:55.554 ************************************ 00:09:55.554 16:27:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.554 16:27:26 -- common/autotest_common.sh@10 -- # set +x 00:09:55.554 16:27:26 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:55.554 16:27:26 -- scheduler/scheduler.sh@46 -- # killprocess 116194 00:09:55.554 16:27:26 -- common/autotest_common.sh@926 -- # '[' -z 116194 ']' 00:09:55.554 16:27:26 -- common/autotest_common.sh@930 -- # kill -0 116194 00:09:55.554 16:27:26 -- common/autotest_common.sh@931 -- # uname 00:09:55.554 16:27:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:55.554 16:27:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116194 00:09:55.554 16:27:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:55.554 16:27:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:55.554 killing process with pid 116194 00:09:55.554 16:27:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116194' 00:09:55.554 16:27:26 -- common/autotest_common.sh@945 -- # kill 116194 00:09:55.554 16:27:26 -- common/autotest_common.sh@950 -- # wait 116194 00:09:55.813 [2024-07-13 16:27:27.107807] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
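The whole thread lifecycle above runs through test-only RPCs from scheduler_plugin: four active_pinned threads at 100%, four idle_pinned at 0%, a one-third-active thread, a half-active thread later raised to 50% (id 11), and a create-then-delete of thread 12. A sketch of the same calls; note that rpc.py must be able to import the plugin, e.g. with PYTHONPATH pointing at the scheduler test directory (an assumption here, the harness wires this up itself):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50    # retune thread 11
$rpc --plugin scheduler_plugin scheduler_thread_delete 12           # drop thread 12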
00:09:56.071 00:09:56.071 real 0m4.369s 00:09:56.071 user 0m7.627s 00:09:56.071 sys 0m0.600s 00:09:56.071 16:27:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.071 16:27:27 -- common/autotest_common.sh@10 -- # set +x 00:09:56.071 ************************************ 00:09:56.071 END TEST event_scheduler 00:09:56.071 ************************************ 00:09:56.338 16:27:27 -- event/event.sh@51 -- # modprobe -n nbd 00:09:56.338 16:27:27 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:56.338 16:27:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:56.338 16:27:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:56.338 16:27:27 -- common/autotest_common.sh@10 -- # set +x 00:09:56.338 ************************************ 00:09:56.338 START TEST app_repeat 00:09:56.338 ************************************ 00:09:56.338 16:27:27 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:56.338 16:27:27 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.338 16:27:27 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:56.338 16:27:27 -- event/event.sh@13 -- # local nbd_list 00:09:56.338 16:27:27 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:56.338 16:27:27 -- event/event.sh@14 -- # local bdev_list 00:09:56.338 16:27:27 -- event/event.sh@15 -- # local repeat_times=4 00:09:56.338 16:27:27 -- event/event.sh@17 -- # modprobe nbd 00:09:56.338 16:27:27 -- event/event.sh@19 -- # repeat_pid=116305 00:09:56.338 16:27:27 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:56.338 16:27:27 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:56.338 Process app_repeat pid: 116305 00:09:56.338 16:27:27 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 116305' 00:09:56.338 16:27:27 -- event/event.sh@23 -- # for i in {0..2} 00:09:56.338 spdk_app_start Round 0 00:09:56.338 16:27:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:56.338 16:27:27 -- event/event.sh@25 -- # waitforlisten 116305 /var/tmp/spdk-nbd.sock 00:09:56.338 16:27:27 -- common/autotest_common.sh@819 -- # '[' -z 116305 ']' 00:09:56.338 16:27:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:56.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:56.338 16:27:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:56.338 16:27:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:56.338 16:27:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:56.338 16:27:27 -- common/autotest_common.sh@10 -- # set +x 00:09:56.338 [2024-07-13 16:27:27.653958] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
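app_repeat closes out the event suite by restarting one SPDK app instance repeatedly: the for i in {0..2} loop above drives three spdk_app_start rounds against the same instance and RPC socket, re-creating and re-verifying bdevs each time (-t 4 appears to carry the repeat_times=4 set just before it):

# the harness under test; -r names the RPC socket, -m pins it to cores 0-1
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4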
00:09:56.338 [2024-07-13 16:27:27.654248] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116305 ] 00:09:56.597 [2024-07-13 16:27:27.817202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.597 [2024-07-13 16:27:27.899266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.597 [2024-07-13 16:27:27.899268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.165 16:27:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:57.165 16:27:28 -- common/autotest_common.sh@852 -- # return 0 00:09:57.165 16:27:28 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.426 Malloc0 00:09:57.426 16:27:28 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.685 Malloc1 00:09:57.685 16:27:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@12 -- # local i 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:57.685 16:27:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:57.944 /dev/nbd0 00:09:57.944 16:27:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:57.944 16:27:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:57.944 16:27:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:57.944 16:27:29 -- common/autotest_common.sh@857 -- # local i 00:09:57.944 16:27:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:57.944 16:27:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:57.944 16:27:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:57.944 16:27:29 -- common/autotest_common.sh@861 -- # break 00:09:57.944 16:27:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:57.944 16:27:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:57.944 16:27:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:57.944 1+0 records in 00:09:57.944 1+0 records out 00:09:57.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280515 s, 14.6 MB/s 00:09:57.944 16:27:29 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:57.944 16:27:29 -- common/autotest_common.sh@874 -- # size=4096 00:09:57.944 16:27:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:57.944 16:27:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:57.944 16:27:29 -- common/autotest_common.sh@877 -- # return 0 00:09:57.944 16:27:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:57.944 16:27:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:57.944 16:27:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:58.203 /dev/nbd1 00:09:58.203 16:27:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:58.203 16:27:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:58.203 16:27:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:58.203 16:27:29 -- common/autotest_common.sh@857 -- # local i 00:09:58.203 16:27:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:58.203 16:27:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:58.203 16:27:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:58.461 16:27:29 -- common/autotest_common.sh@861 -- # break 00:09:58.461 16:27:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:58.461 16:27:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:58.461 16:27:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:58.461 1+0 records in 00:09:58.461 1+0 records out 00:09:58.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396965 s, 10.3 MB/s 00:09:58.461 16:27:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.461 16:27:29 -- common/autotest_common.sh@874 -- # size=4096 00:09:58.461 16:27:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.461 16:27:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:58.461 16:27:29 -- common/autotest_common.sh@877 -- # return 0 00:09:58.461 16:27:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.461 16:27:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.461 16:27:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:58.461 16:27:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.461 16:27:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:58.461 16:27:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:58.461 { 00:09:58.461 "nbd_device": "/dev/nbd0", 00:09:58.461 "bdev_name": "Malloc0" 00:09:58.461 }, 00:09:58.461 { 00:09:58.461 "nbd_device": "/dev/nbd1", 00:09:58.461 "bdev_name": "Malloc1" 00:09:58.461 } 00:09:58.461 ]' 00:09:58.461 16:27:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:58.461 { 00:09:58.461 "nbd_device": "/dev/nbd0", 00:09:58.461 "bdev_name": "Malloc0" 00:09:58.461 }, 00:09:58.461 { 00:09:58.461 "nbd_device": "/dev/nbd1", 00:09:58.461 "bdev_name": "Malloc1" 00:09:58.461 } 00:09:58.461 ]' 00:09:58.461 16:27:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:58.720 /dev/nbd1' 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:58.720 /dev/nbd1' 00:09:58.720 16:27:29 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@65 -- # count=2 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@95 -- # count=2 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:58.720 256+0 records in 00:09:58.720 256+0 records out 00:09:58.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00873861 s, 120 MB/s 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.720 16:27:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:58.720 256+0 records in 00:09:58.720 256+0 records out 00:09:58.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341861 s, 30.7 MB/s 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:58.720 256+0 records in 00:09:58.720 256+0 records out 00:09:58.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280134 s, 37.4 MB/s 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@51 -- # local i 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.720 16:27:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@41 -- # break 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.979 16:27:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@41 -- # break 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.237 16:27:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.495 16:27:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:59.495 16:27:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:59.495 16:27:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:59.753 16:27:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:59.753 16:27:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.753 16:27:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:59.753 16:27:30 -- bdev/nbd_common.sh@65 -- # true 00:09:59.753 16:27:30 -- bdev/nbd_common.sh@65 -- # count=0 00:09:59.753 16:27:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:59.753 16:27:30 -- bdev/nbd_common.sh@104 -- # count=0 00:09:59.753 16:27:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:59.753 16:27:30 -- bdev/nbd_common.sh@109 -- # return 0 00:09:59.753 16:27:30 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:00.012 16:27:31 -- event/event.sh@35 -- # sleep 3 00:10:00.270 [2024-07-13 16:27:31.621188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.270 [2024-07-13 16:27:31.700159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.270 [2024-07-13 16:27:31.700160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.528 [2024-07-13 16:27:31.778806] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:00.528 [2024-07-13 16:27:31.778947] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
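The grep -q -w nbdX /proc/partitions, direct-I/O dd, and stat -c %s commands traced above form the readiness probe that runs after every nbd_start_disk RPC: the device must show up in /proc/partitions and serve one 4096-byte direct read before the test proceeds. A minimal sketch of that waitfornbd loop, reconstructed from the traced commands (the 20-iteration bound, probe size, and nonzero-size check are taken from the trace; the temp-file path and the retry sleep are assumptions):

# Poll until /dev/<name> is listed in /proc/partitions, then prove it serves data.
waitfornbd() {
    local nbd_name=$1 tmp_file=/tmp/nbdtest i size
    for (( i = 1; i <= 20; i++ )); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1   # assumed; the traced runs succeed on the first probe
    done
    (( i <= 20 )) || return 1
    # One direct-I/O read bypasses the page cache, so it exercises the real NBD path.
    dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$tmp_file")
    rm -f "$tmp_file"
    [ "$size" != 0 ]
}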
00:10:03.060 16:27:34 -- event/event.sh@23 -- # for i in {0..2} 00:10:03.060 spdk_app_start Round 1 00:10:03.060 16:27:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:03.060 16:27:34 -- event/event.sh@25 -- # waitforlisten 116305 /var/tmp/spdk-nbd.sock 00:10:03.060 16:27:34 -- common/autotest_common.sh@819 -- # '[' -z 116305 ']' 00:10:03.060 16:27:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:03.060 16:27:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:03.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:03.060 16:27:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:03.060 16:27:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:03.060 16:27:34 -- common/autotest_common.sh@10 -- # set +x 00:10:03.318 16:27:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:03.318 16:27:34 -- common/autotest_common.sh@852 -- # return 0 00:10:03.318 16:27:34 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:03.318 Malloc0 00:10:03.318 16:27:34 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:03.885 Malloc1 00:10:03.885 16:27:35 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@12 -- # local i 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:03.885 /dev/nbd0 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:03.885 16:27:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:03.885 16:27:35 -- common/autotest_common.sh@857 -- # local i 00:10:03.885 16:27:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:03.885 16:27:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:03.885 16:27:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:03.885 16:27:35 -- common/autotest_common.sh@861 -- # break 00:10:03.885 16:27:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:03.885 16:27:35 -- common/autotest_common.sh@872 -- # (( 
i <= 20 )) 00:10:03.885 16:27:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:03.885 1+0 records in 00:10:03.885 1+0 records out 00:10:03.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246576 s, 16.6 MB/s 00:10:03.885 16:27:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:03.885 16:27:35 -- common/autotest_common.sh@874 -- # size=4096 00:10:03.885 16:27:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:03.885 16:27:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:03.885 16:27:35 -- common/autotest_common.sh@877 -- # return 0 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:03.885 16:27:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:04.453 /dev/nbd1 00:10:04.453 16:27:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:04.453 16:27:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:04.453 16:27:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:04.453 16:27:35 -- common/autotest_common.sh@857 -- # local i 00:10:04.453 16:27:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:04.453 16:27:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:04.453 16:27:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:04.453 16:27:35 -- common/autotest_common.sh@861 -- # break 00:10:04.453 16:27:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:04.453 16:27:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:04.453 16:27:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:04.453 1+0 records in 00:10:04.453 1+0 records out 00:10:04.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377181 s, 10.9 MB/s 00:10:04.453 16:27:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.453 16:27:35 -- common/autotest_common.sh@874 -- # size=4096 00:10:04.453 16:27:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.453 16:27:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:04.453 16:27:35 -- common/autotest_common.sh@877 -- # return 0 00:10:04.453 16:27:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:04.453 16:27:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:04.453 16:27:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:04.453 16:27:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.453 16:27:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:04.712 16:27:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:04.712 { 00:10:04.712 "nbd_device": "/dev/nbd0", 00:10:04.712 "bdev_name": "Malloc0" 00:10:04.712 }, 00:10:04.712 { 00:10:04.712 "nbd_device": "/dev/nbd1", 00:10:04.712 "bdev_name": "Malloc1" 00:10:04.712 } 00:10:04.712 ]' 00:10:04.712 16:27:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:04.712 { 00:10:04.712 "nbd_device": "/dev/nbd0", 00:10:04.712 "bdev_name": "Malloc0" 00:10:04.712 }, 00:10:04.712 { 00:10:04.712 "nbd_device": "/dev/nbd1", 00:10:04.712 "bdev_name": "Malloc1" 00:10:04.712 } 
00:10:04.712 ]' 00:10:04.712 16:27:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:04.712 16:27:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:04.712 /dev/nbd1' 00:10:04.712 16:27:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:04.712 /dev/nbd1' 00:10:04.712 16:27:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:04.712 16:27:35 -- bdev/nbd_common.sh@65 -- # count=2 00:10:04.712 16:27:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:04.712 16:27:35 -- bdev/nbd_common.sh@95 -- # count=2 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:04.712 256+0 records in 00:10:04.712 256+0 records out 00:10:04.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00882569 s, 119 MB/s 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:04.712 256+0 records in 00:10:04.712 256+0 records out 00:10:04.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279567 s, 37.5 MB/s 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:04.712 256+0 records in 00:10:04.712 256+0 records out 00:10:04.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0391505 s, 26.8 MB/s 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
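The pair of 256-block dd runs and the two cmp -b -n 1M calls above are the whole data-integrity check: one random 1 MiB pattern is written through every NBD device with direct I/O, then each device is compared back byte-for-byte against the pattern file. Condensed into a standalone sketch (device list, block size, and count as in the trace; the temp path is shortened here):

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest   # the trace keeps this under the repo's test/event directory

# write phase: push the same random 1 MiB pattern to every device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: each device must read back identical to the pattern file
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"   # -b prints the differing bytes if data diverges
done
rm "$tmp_file"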
00:10:04.712 16:27:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@51 -- # local i 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.712 16:27:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@41 -- # break 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@45 -- # return 0 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:04.971 16:27:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@41 -- # break 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.230 16:27:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@65 -- # true 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@65 -- # count=0 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@104 -- # count=0 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:05.489 16:27:36 -- bdev/nbd_common.sh@109 -- # return 0 00:10:05.489 16:27:36 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:06.058 16:27:37 -- event/event.sh@35 -- # sleep 3 00:10:06.318 [2024-07-13 16:27:37.566673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.318 [2024-07-13 16:27:37.642803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.318 [2024-07-13 16:27:37.642805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.318 [2024-07-13 16:27:37.722555] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
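Round 1 finishes exactly as Round 0 did: both devices are stopped, nbd_get_disks reports an empty list, spdk_kill_instance SIGTERM is sent, and the script sleeps 3 seconds while the app restarts for the next round. The outer loop driving this, reconstructed from the event.sh@23 through event.sh@35 trace lines (RPC names and arguments are literal from the trace; app_pid stands for the pid shown as 116305, and the verify helpers are the ones traced above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$app_pid" "$sock"                # the app comes back up between rounds
    "$rpc" -s "$sock" bdev_malloc_create 64 4096    # Malloc0: 64 MiB, 4 KiB blocks
    "$rpc" -s "$sock" bdev_malloc_create 64 4096    # Malloc1
    nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM    # triggers the next iteration
    sleep 3
done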
00:10:06.318 [2024-07-13 16:27:37.722683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:08.849 spdk_app_start Round 2 00:10:08.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:08.849 16:27:40 -- event/event.sh@23 -- # for i in {0..2} 00:10:08.849 16:27:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:08.849 16:27:40 -- event/event.sh@25 -- # waitforlisten 116305 /var/tmp/spdk-nbd.sock 00:10:08.849 16:27:40 -- common/autotest_common.sh@819 -- # '[' -z 116305 ']' 00:10:08.849 16:27:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:08.849 16:27:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:08.849 16:27:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:08.849 16:27:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:08.849 16:27:40 -- common/autotest_common.sh@10 -- # set +x 00:10:09.108 16:27:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:09.108 16:27:40 -- common/autotest_common.sh@852 -- # return 0 00:10:09.108 16:27:40 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:09.366 Malloc0 00:10:09.366 16:27:40 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:09.623 Malloc1 00:10:09.623 16:27:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@12 -- # local i 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:09.623 16:27:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:09.881 /dev/nbd0 00:10:09.881 16:27:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:09.881 16:27:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:09.881 16:27:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:09.881 16:27:41 -- common/autotest_common.sh@857 -- # local i 00:10:09.881 16:27:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:09.881 16:27:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:09.881 16:27:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:09.881 16:27:41 -- 
common/autotest_common.sh@861 -- # break 00:10:09.881 16:27:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:09.881 16:27:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:09.881 16:27:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:09.881 1+0 records in 00:10:09.881 1+0 records out 00:10:09.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619944 s, 6.6 MB/s 00:10:09.881 16:27:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:09.881 16:27:41 -- common/autotest_common.sh@874 -- # size=4096 00:10:09.881 16:27:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:09.881 16:27:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:09.881 16:27:41 -- common/autotest_common.sh@877 -- # return 0 00:10:09.881 16:27:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:09.881 16:27:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:09.881 16:27:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:10.149 /dev/nbd1 00:10:10.149 16:27:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:10.149 16:27:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:10.149 16:27:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:10.149 16:27:41 -- common/autotest_common.sh@857 -- # local i 00:10:10.149 16:27:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:10.149 16:27:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:10.149 16:27:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:10.149 16:27:41 -- common/autotest_common.sh@861 -- # break 00:10:10.149 16:27:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:10.149 16:27:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:10.149 16:27:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:10.149 1+0 records in 00:10:10.149 1+0 records out 00:10:10.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680718 s, 6.0 MB/s 00:10:10.149 16:27:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:10.149 16:27:41 -- common/autotest_common.sh@874 -- # size=4096 00:10:10.149 16:27:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:10.149 16:27:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:10.149 16:27:41 -- common/autotest_common.sh@877 -- # return 0 00:10:10.149 16:27:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:10.149 16:27:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:10.149 16:27:41 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:10.149 16:27:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.149 16:27:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:10.422 16:27:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:10.422 { 00:10:10.422 "nbd_device": "/dev/nbd0", 00:10:10.422 "bdev_name": "Malloc0" 00:10:10.422 }, 00:10:10.422 { 00:10:10.422 "nbd_device": "/dev/nbd1", 00:10:10.422 "bdev_name": "Malloc1" 00:10:10.422 } 00:10:10.422 ]' 00:10:10.422 16:27:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:10.422 16:27:41 -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:10:10.422 { 00:10:10.422 "nbd_device": "/dev/nbd0", 00:10:10.422 "bdev_name": "Malloc0" 00:10:10.422 }, 00:10:10.422 { 00:10:10.422 "nbd_device": "/dev/nbd1", 00:10:10.422 "bdev_name": "Malloc1" 00:10:10.422 } 00:10:10.422 ]' 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:10.681 /dev/nbd1' 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:10.681 /dev/nbd1' 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@65 -- # count=2 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@95 -- # count=2 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:10.681 256+0 records in 00:10:10.681 256+0 records out 00:10:10.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00839067 s, 125 MB/s 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:10.681 256+0 records in 00:10:10.681 256+0 records out 00:10:10.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284703 s, 36.8 MB/s 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:10.681 16:27:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:10.681 256+0 records in 00:10:10.681 256+0 records out 00:10:10.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285449 s, 36.7 MB/s 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 
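Teardown mirrors startup: after each nbd_stop_disk RPC, waitfornbd_exit polls /proc/partitions until the device name disappears. A minimal sketch matching the traced loop (same 20-iteration bound; in the trace the entry is already gone on the first probe, so the break fires immediately; the retry sleep is an assumption):

waitfornbd_exit() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            break   # gone: stop polling
        fi
        sleep 0.1
    done
    (( i <= 20 ))   # fail if the device never went away
}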
00:10:10.681 16:27:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@51 -- # local i 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:10.681 16:27:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@41 -- # break 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@45 -- # return 0 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:10.940 16:27:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@41 -- # break 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@45 -- # return 0 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:11.199 16:27:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@65 -- # true 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@65 -- # count=0 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@104 -- # count=0 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:11.466 16:27:42 -- bdev/nbd_common.sh@109 -- # return 0 00:10:11.466 16:27:42 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:11.729 16:27:43 -- event/event.sh@35 -- # sleep 3 00:10:11.987 [2024-07-13 16:27:43.383646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:12.245 [2024-07-13 16:27:43.463220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.245 [2024-07-13 16:27:43.463220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.245 [2024-07-13 16:27:43.543757] notify.c: 45:spdk_notify_type_register: 
*NOTICE*: Notification type 'bdev_register' already registered. 00:10:12.245 [2024-07-13 16:27:43.543884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:14.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:14.789 16:27:46 -- event/event.sh@38 -- # waitforlisten 116305 /var/tmp/spdk-nbd.sock 00:10:14.789 16:27:46 -- common/autotest_common.sh@819 -- # '[' -z 116305 ']' 00:10:14.789 16:27:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:14.789 16:27:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:14.789 16:27:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:14.789 16:27:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:14.789 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:10:15.047 16:27:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:15.047 16:27:46 -- common/autotest_common.sh@852 -- # return 0 00:10:15.047 16:27:46 -- event/event.sh@39 -- # killprocess 116305 00:10:15.047 16:27:46 -- common/autotest_common.sh@926 -- # '[' -z 116305 ']' 00:10:15.047 16:27:46 -- common/autotest_common.sh@930 -- # kill -0 116305 00:10:15.047 16:27:46 -- common/autotest_common.sh@931 -- # uname 00:10:15.047 16:27:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:15.047 16:27:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116305 00:10:15.047 16:27:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:15.047 16:27:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:15.047 16:27:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116305' 00:10:15.047 killing process with pid 116305 00:10:15.047 16:27:46 -- common/autotest_common.sh@945 -- # kill 116305 00:10:15.047 16:27:46 -- common/autotest_common.sh@950 -- # wait 116305 00:10:15.306 spdk_app_start is called in Round 0. 00:10:15.306 Shutdown signal received, stop current app iteration 00:10:15.306 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:10:15.306 spdk_app_start is called in Round 1. 00:10:15.306 Shutdown signal received, stop current app iteration 00:10:15.306 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:10:15.306 spdk_app_start is called in Round 2. 00:10:15.306 Shutdown signal received, stop current app iteration 00:10:15.306 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:10:15.306 spdk_app_start is called in Round 3. 
00:10:15.306 Shutdown signal received, stop current app iteration 00:10:15.306 16:27:46 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:15.306 16:27:46 -- event/event.sh@42 -- # return 0 00:10:15.306 00:10:15.306 real 0m19.080s 00:10:15.306 user 0m41.409s 00:10:15.306 sys 0m3.802s 00:10:15.306 16:27:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.306 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:10:15.306 ************************************ 00:10:15.306 END TEST app_repeat 00:10:15.306 ************************************ 00:10:15.306 16:27:46 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:15.306 16:27:46 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:15.306 16:27:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:15.306 16:27:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.306 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:10:15.306 ************************************ 00:10:15.306 START TEST cpu_locks 00:10:15.306 ************************************ 00:10:15.306 16:27:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:15.565 * Looking for test storage... 00:10:15.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:15.565 16:27:46 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:15.565 16:27:46 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:15.565 16:27:46 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:15.565 16:27:46 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:15.565 16:27:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:15.565 16:27:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.565 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:10:15.565 ************************************ 00:10:15.565 START TEST default_locks 00:10:15.565 ************************************ 00:10:15.565 16:27:46 -- common/autotest_common.sh@1104 -- # default_locks 00:10:15.565 16:27:46 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=116819 00:10:15.565 16:27:46 -- event/cpu_locks.sh@47 -- # waitforlisten 116819 00:10:15.565 16:27:46 -- common/autotest_common.sh@819 -- # '[' -z 116819 ']' 00:10:15.565 16:27:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.565 16:27:46 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:15.565 16:27:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:15.565 16:27:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.565 16:27:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:15.565 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:10:15.565 [2024-07-13 16:27:46.938443] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
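Every START TEST / END TEST banner and real/user/sys triple in this log comes from the run_test wrapper: it validates its arguments (the '[' 2 -le 1 ']' check above), prints an opening banner, times the named test command, and prints a closing banner. A simplified sketch of that wrapper (banner mechanics inferred from the output; the real helper also toggles xtrace, which is elided here):

run_test() {
    [ $# -gt 1 ] || return 1            # needs a test name plus a command to run
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                           # e.g. run_test default_locks default_locks
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}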
00:10:15.565 [2024-07-13 16:27:46.938715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116819 ] 00:10:15.823 [2024-07-13 16:27:47.096114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.823 [2024-07-13 16:27:47.168210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.823 [2024-07-13 16:27:47.168484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.390 16:27:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:16.390 16:27:47 -- common/autotest_common.sh@852 -- # return 0 00:10:16.390 16:27:47 -- event/cpu_locks.sh@49 -- # locks_exist 116819 00:10:16.390 16:27:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:16.390 16:27:47 -- event/cpu_locks.sh@22 -- # lslocks -p 116819 00:10:16.957 16:27:48 -- event/cpu_locks.sh@50 -- # killprocess 116819 00:10:16.957 16:27:48 -- common/autotest_common.sh@926 -- # '[' -z 116819 ']' 00:10:16.957 16:27:48 -- common/autotest_common.sh@930 -- # kill -0 116819 00:10:16.957 16:27:48 -- common/autotest_common.sh@931 -- # uname 00:10:16.957 16:27:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:16.957 16:27:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116819 00:10:16.957 16:27:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:16.957 killing process with pid 116819 00:10:16.957 16:27:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:16.957 16:27:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116819' 00:10:16.957 16:27:48 -- common/autotest_common.sh@945 -- # kill 116819 00:10:16.957 16:27:48 -- common/autotest_common.sh@950 -- # wait 116819 00:10:17.526 16:27:48 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 116819 00:10:17.526 16:27:48 -- common/autotest_common.sh@640 -- # local es=0 00:10:17.526 16:27:48 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 116819 00:10:17.526 16:27:48 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:17.526 16:27:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:17.526 16:27:48 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:17.526 16:27:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:17.526 16:27:48 -- common/autotest_common.sh@643 -- # waitforlisten 116819 00:10:17.526 16:27:48 -- common/autotest_common.sh@819 -- # '[' -z 116819 ']' 00:10:17.526 16:27:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.526 16:27:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:17.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.526 16:27:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
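Two defensive helpers do the cleanup above. locks_exist asks lslocks whether the target pid still holds the scheduler's spdk_cpu_lock file lock, and killprocess refuses to signal anything it cannot positively identify: it checks that the pid is alive with kill -0, reads the process name with ps --no-headers -o comm= (the reactor_0 seen above), rejects sudo wrappers, then kills and waits to reap the exit status. Sketches of both, following the traced sequence with error handling condensed:

# Does this pid still hold an SPDK CPU-core lock file?
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                       # must still be running
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        return 1                                     # never SIGTERM a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap; propagates the exit status
}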
00:10:17.526 16:27:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:17.526 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:10:17.526 ERROR: process (pid: 116819) is no longer running 00:10:17.526 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (116819) - No such process 00:10:17.526 16:27:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:17.526 16:27:48 -- common/autotest_common.sh@852 -- # return 1 00:10:17.526 16:27:48 -- common/autotest_common.sh@643 -- # es=1 00:10:17.526 16:27:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:17.526 16:27:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:17.526 16:27:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:17.526 16:27:48 -- event/cpu_locks.sh@54 -- # no_locks 00:10:17.526 16:27:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:17.526 16:27:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:17.526 16:27:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:17.526 00:10:17.526 real 0m2.014s 00:10:17.526 user 0m1.872s 00:10:17.526 sys 0m0.814s 00:10:17.526 16:27:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.526 ************************************ 00:10:17.526 END TEST default_locks 00:10:17.526 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:10:17.526 ************************************ 00:10:17.526 16:27:48 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:17.526 16:27:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:17.526 16:27:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:17.526 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:10:17.526 ************************************ 00:10:17.526 START TEST default_locks_via_rpc 00:10:17.526 ************************************ 00:10:17.526 16:27:48 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:10:17.526 16:27:48 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=116880 00:10:17.526 16:27:48 -- event/cpu_locks.sh@63 -- # waitforlisten 116880 00:10:17.526 16:27:48 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:17.526 16:27:48 -- common/autotest_common.sh@819 -- # '[' -z 116880 ']' 00:10:17.526 16:27:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.526 16:27:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:17.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.526 16:27:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.526 16:27:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:17.526 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:10:17.785 [2024-07-13 16:27:49.014177] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:17.785 [2024-07-13 16:27:49.014456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116880 ] 00:10:17.785 [2024-07-13 16:27:49.171810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.785 [2024-07-13 16:27:49.254443] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:17.785 [2024-07-13 16:27:49.254696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.721 16:27:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:18.721 16:27:49 -- common/autotest_common.sh@852 -- # return 0 00:10:18.721 16:27:49 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:18.721 16:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.721 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:10:18.721 16:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.721 16:27:49 -- event/cpu_locks.sh@67 -- # no_locks 00:10:18.721 16:27:49 -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:18.721 16:27:49 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:18.721 16:27:49 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:18.721 16:27:49 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:18.721 16:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.721 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:10:18.721 16:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.721 16:27:49 -- event/cpu_locks.sh@71 -- # locks_exist 116880 00:10:18.721 16:27:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:18.721 16:27:49 -- event/cpu_locks.sh@22 -- # lslocks -p 116880 00:10:18.980 16:27:50 -- event/cpu_locks.sh@73 -- # killprocess 116880 00:10:18.980 16:27:50 -- common/autotest_common.sh@926 -- # '[' -z 116880 ']' 00:10:18.980 16:27:50 -- common/autotest_common.sh@930 -- # kill -0 116880 00:10:18.980 16:27:50 -- common/autotest_common.sh@931 -- # uname 00:10:18.980 16:27:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:18.980 16:27:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116880 00:10:18.980 16:27:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:18.980 16:27:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:18.980 killing process with pid 116880 00:10:18.980 16:27:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116880' 00:10:18.980 16:27:50 -- common/autotest_common.sh@945 -- # kill 116880 00:10:18.980 16:27:50 -- common/autotest_common.sh@950 -- # wait 116880 00:10:19.613 00:10:19.613 real 0m2.112s 00:10:19.613 user 0m2.015s 00:10:19.613 sys 0m0.852s 00:10:19.613 16:27:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.613 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:10:19.613 ************************************ 00:10:19.613 END TEST default_locks_via_rpc 00:10:19.613 ************************************ 00:10:19.872 16:27:51 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:19.872 16:27:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:19.872 16:27:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.872 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:10:19.872 
************************************ 00:10:19.872 START TEST non_locking_app_on_locked_coremask 00:10:19.872 ************************************ 00:10:19.872 16:27:51 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:10:19.872 16:27:51 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=116943 00:10:19.872 16:27:51 -- event/cpu_locks.sh@81 -- # waitforlisten 116943 /var/tmp/spdk.sock 00:10:19.872 16:27:51 -- common/autotest_common.sh@819 -- # '[' -z 116943 ']' 00:10:19.872 16:27:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.872 16:27:51 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:19.872 16:27:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:19.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.872 16:27:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.872 16:27:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:19.872 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:10:19.872 [2024-07-13 16:27:51.197096] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:19.872 [2024-07-13 16:27:51.197392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116943 ] 00:10:20.130 [2024-07-13 16:27:51.356215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.130 [2024-07-13 16:27:51.442497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:20.130 [2024-07-13 16:27:51.442766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.064 16:27:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:21.064 16:27:52 -- common/autotest_common.sh@852 -- # return 0 00:10:21.064 16:27:52 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=116964 00:10:21.064 16:27:52 -- event/cpu_locks.sh@85 -- # waitforlisten 116964 /var/tmp/spdk2.sock 00:10:21.064 16:27:52 -- common/autotest_common.sh@819 -- # '[' -z 116964 ']' 00:10:21.064 16:27:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:21.064 16:27:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:21.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:21.064 16:27:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:21.064 16:27:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:21.064 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:10:21.064 16:27:52 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:21.064 [2024-07-13 16:27:52.259965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:21.064 [2024-07-13 16:27:52.260247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116964 ] 00:10:21.064 [2024-07-13 16:27:52.410063] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:21.064 [2024-07-13 16:27:52.410169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.322 [2024-07-13 16:27:52.609348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:21.322 [2024-07-13 16:27:52.609630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.701 16:27:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:22.701 16:27:53 -- common/autotest_common.sh@852 -- # return 0 00:10:22.701 16:27:53 -- event/cpu_locks.sh@87 -- # locks_exist 116943 00:10:22.701 16:27:53 -- event/cpu_locks.sh@22 -- # lslocks -p 116943 00:10:22.701 16:27:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:23.267 16:27:54 -- event/cpu_locks.sh@89 -- # killprocess 116943 00:10:23.267 16:27:54 -- common/autotest_common.sh@926 -- # '[' -z 116943 ']' 00:10:23.267 16:27:54 -- common/autotest_common.sh@930 -- # kill -0 116943 00:10:23.267 16:27:54 -- common/autotest_common.sh@931 -- # uname 00:10:23.267 16:27:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:23.267 16:27:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116943 00:10:23.267 16:27:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:23.267 16:27:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:23.267 16:27:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116943' 00:10:23.267 killing process with pid 116943 00:10:23.267 16:27:54 -- common/autotest_common.sh@945 -- # kill 116943 00:10:23.267 16:27:54 -- common/autotest_common.sh@950 -- # wait 116943 00:10:24.643 16:27:55 -- event/cpu_locks.sh@90 -- # killprocess 116964 00:10:24.643 16:27:55 -- common/autotest_common.sh@926 -- # '[' -z 116964 ']' 00:10:24.643 16:27:55 -- common/autotest_common.sh@930 -- # kill -0 116964 00:10:24.643 16:27:55 -- common/autotest_common.sh@931 -- # uname 00:10:24.643 16:27:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:24.643 16:27:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116964 00:10:24.643 16:27:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:24.643 16:27:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:24.643 killing process with pid 116964 00:10:24.643 16:27:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116964' 00:10:24.643 16:27:55 -- common/autotest_common.sh@945 -- # kill 116964 00:10:24.643 16:27:55 -- common/autotest_common.sh@950 -- # wait 116964 00:10:25.579 00:10:25.579 real 0m5.574s 00:10:25.579 user 0m5.856s 00:10:25.579 sys 0m1.665s 00:10:25.579 ************************************ 00:10:25.579 END TEST non_locking_app_on_locked_coremask 00:10:25.579 ************************************ 00:10:25.579 16:27:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.579 16:27:56 -- common/autotest_common.sh@10 -- # set +x 00:10:25.579 16:27:56 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:25.579 16:27:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:25.579 16:27:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.579 16:27:56 -- common/autotest_common.sh@10 -- # set +x 00:10:25.579 ************************************ 00:10:25.579 START TEST locking_app_on_unlocked_coremask 00:10:25.579 ************************************ 00:10:25.579 16:27:56 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:10:25.579 
16:27:56 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=117052 00:10:25.579 16:27:56 -- event/cpu_locks.sh@99 -- # waitforlisten 117052 /var/tmp/spdk.sock 00:10:25.579 16:27:56 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:25.579 16:27:56 -- common/autotest_common.sh@819 -- # '[' -z 117052 ']' 00:10:25.579 16:27:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.579 16:27:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:25.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.579 16:27:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.579 16:27:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:25.579 16:27:56 -- common/autotest_common.sh@10 -- # set +x 00:10:25.579 [2024-07-13 16:27:56.839067] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:25.579 [2024-07-13 16:27:56.839949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117052 ] 00:10:25.579 [2024-07-13 16:27:56.996360] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:25.579 [2024-07-13 16:27:56.996454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.838 [2024-07-13 16:27:57.082617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:25.838 [2024-07-13 16:27:57.082863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.405 16:27:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:26.405 16:27:57 -- common/autotest_common.sh@852 -- # return 0 00:10:26.405 16:27:57 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=117073 00:10:26.405 16:27:57 -- event/cpu_locks.sh@103 -- # waitforlisten 117073 /var/tmp/spdk2.sock 00:10:26.405 16:27:57 -- common/autotest_common.sh@819 -- # '[' -z 117073 ']' 00:10:26.405 16:27:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:26.405 16:27:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:26.405 16:27:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:26.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:26.405 16:27:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:26.405 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:10:26.405 16:27:57 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:26.663 [2024-07-13 16:27:57.890285] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:26.663 [2024-07-13 16:27:57.890546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117073 ] 00:10:26.663 [2024-07-13 16:27:58.040949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.921 [2024-07-13 16:27:58.221506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:26.921 [2024-07-13 16:27:58.221738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.296 16:27:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:28.296 16:27:59 -- common/autotest_common.sh@852 -- # return 0 00:10:28.296 16:27:59 -- event/cpu_locks.sh@105 -- # locks_exist 117073 00:10:28.296 16:27:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:28.296 16:27:59 -- event/cpu_locks.sh@22 -- # lslocks -p 117073 00:10:28.862 16:28:00 -- event/cpu_locks.sh@107 -- # killprocess 117052 00:10:28.862 16:28:00 -- common/autotest_common.sh@926 -- # '[' -z 117052 ']' 00:10:28.862 16:28:00 -- common/autotest_common.sh@930 -- # kill -0 117052 00:10:28.862 16:28:00 -- common/autotest_common.sh@931 -- # uname 00:10:28.862 16:28:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:28.862 16:28:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117052 00:10:28.862 16:28:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:28.862 killing process with pid 117052 00:10:28.862 16:28:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:28.862 16:28:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117052' 00:10:28.862 16:28:00 -- common/autotest_common.sh@945 -- # kill 117052 00:10:28.862 16:28:00 -- common/autotest_common.sh@950 -- # wait 117052 00:10:30.250 16:28:01 -- event/cpu_locks.sh@108 -- # killprocess 117073 00:10:30.250 16:28:01 -- common/autotest_common.sh@926 -- # '[' -z 117073 ']' 00:10:30.250 16:28:01 -- common/autotest_common.sh@930 -- # kill -0 117073 00:10:30.250 16:28:01 -- common/autotest_common.sh@931 -- # uname 00:10:30.250 16:28:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:30.250 16:28:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117073 00:10:30.250 16:28:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:30.250 16:28:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:30.250 killing process with pid 117073 00:10:30.250 16:28:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117073' 00:10:30.250 16:28:01 -- common/autotest_common.sh@945 -- # kill 117073 00:10:30.250 16:28:01 -- common/autotest_common.sh@950 -- # wait 117073 00:10:30.826 00:10:30.826 real 0m5.318s 00:10:30.826 user 0m5.465s 00:10:30.826 sys 0m1.587s 00:10:30.826 16:28:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.826 ************************************ 00:10:30.826 END TEST locking_app_on_unlocked_coremask 00:10:30.826 ************************************ 00:10:30.826 16:28:02 -- common/autotest_common.sh@10 -- # set +x 00:10:30.826 16:28:02 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:30.826 16:28:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:30.826 16:28:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.826 16:28:02 -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.826 ************************************ 00:10:30.826 START TEST locking_app_on_locked_coremask 00:10:30.826 ************************************ 00:10:30.826 16:28:02 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:10:30.826 16:28:02 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=117170 00:10:30.826 16:28:02 -- event/cpu_locks.sh@116 -- # waitforlisten 117170 /var/tmp/spdk.sock 00:10:30.826 16:28:02 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:30.826 16:28:02 -- common/autotest_common.sh@819 -- # '[' -z 117170 ']' 00:10:30.826 16:28:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.826 16:28:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:30.826 16:28:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.826 16:28:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:30.826 16:28:02 -- common/autotest_common.sh@10 -- # set +x 00:10:30.826 [2024-07-13 16:28:02.217603] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:30.826 [2024-07-13 16:28:02.218114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117170 ] 00:10:31.084 [2024-07-13 16:28:02.375302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.084 [2024-07-13 16:28:02.460288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:31.084 [2024-07-13 16:28:02.460545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.648 16:28:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:31.648 16:28:03 -- common/autotest_common.sh@852 -- # return 0 00:10:31.648 16:28:03 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=117191 00:10:31.648 16:28:03 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 117191 /var/tmp/spdk2.sock 00:10:31.648 16:28:03 -- common/autotest_common.sh@640 -- # local es=0 00:10:31.648 16:28:03 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 117191 /var/tmp/spdk2.sock 00:10:31.648 16:28:03 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:31.648 16:28:03 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:31.648 16:28:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:31.648 16:28:03 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:31.648 16:28:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:31.648 16:28:03 -- common/autotest_common.sh@643 -- # waitforlisten 117191 /var/tmp/spdk2.sock 00:10:31.648 16:28:03 -- common/autotest_common.sh@819 -- # '[' -z 117191 ']' 00:10:31.648 16:28:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:31.648 16:28:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:31.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:31.648 16:28:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:31.648 16:28:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:31.648 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:10:31.905 [2024-07-13 16:28:03.170162] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:31.905 [2024-07-13 16:28:03.170442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117191 ] 00:10:31.905 [2024-07-13 16:28:03.337083] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 117170 has claimed it. 00:10:31.905 [2024-07-13 16:28:03.337206] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:32.469 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (117191) - No such process 00:10:32.469 ERROR: process (pid: 117191) is no longer running 00:10:32.469 16:28:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:32.469 16:28:03 -- common/autotest_common.sh@852 -- # return 1 00:10:32.469 16:28:03 -- common/autotest_common.sh@643 -- # es=1 00:10:32.469 16:28:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:32.469 16:28:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:32.469 16:28:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:32.469 16:28:03 -- event/cpu_locks.sh@122 -- # locks_exist 117170 00:10:32.469 16:28:03 -- event/cpu_locks.sh@22 -- # lslocks -p 117170 00:10:32.469 16:28:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:32.727 16:28:04 -- event/cpu_locks.sh@124 -- # killprocess 117170 00:10:32.727 16:28:04 -- common/autotest_common.sh@926 -- # '[' -z 117170 ']' 00:10:32.727 16:28:04 -- common/autotest_common.sh@930 -- # kill -0 117170 00:10:32.727 16:28:04 -- common/autotest_common.sh@931 -- # uname 00:10:32.727 16:28:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:32.727 16:28:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117170 00:10:32.727 16:28:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:32.727 16:28:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:32.727 16:28:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117170' 00:10:32.727 killing process with pid 117170 00:10:32.727 16:28:04 -- common/autotest_common.sh@945 -- # kill 117170 00:10:32.727 16:28:04 -- common/autotest_common.sh@950 -- # wait 117170 00:10:33.293 00:10:33.293 real 0m2.571s 00:10:33.293 user 0m2.573s 00:10:33.293 sys 0m0.856s 00:10:33.293 16:28:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.293 ************************************ 00:10:33.293 END TEST locking_app_on_locked_coremask 00:10:33.293 ************************************ 00:10:33.293 16:28:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.293 16:28:04 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:33.293 16:28:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:33.293 16:28:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.293 16:28:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.559 ************************************ 00:10:33.559 START TEST 
locking_overlapped_coremask 00:10:33.559 ************************************ 00:10:33.559 16:28:04 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:10:33.559 16:28:04 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117236 00:10:33.559 16:28:04 -- event/cpu_locks.sh@133 -- # waitforlisten 117236 /var/tmp/spdk.sock 00:10:33.559 16:28:04 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:33.559 16:28:04 -- common/autotest_common.sh@819 -- # '[' -z 117236 ']' 00:10:33.559 16:28:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.559 16:28:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:33.559 16:28:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.559 16:28:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:33.559 16:28:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.559 [2024-07-13 16:28:04.848614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:33.559 [2024-07-13 16:28:04.849177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117236 ] 00:10:33.559 [2024-07-13 16:28:05.013262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:33.817 [2024-07-13 16:28:05.103615] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:33.817 [2024-07-13 16:28:05.104111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.817 [2024-07-13 16:28:05.104074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.817 [2024-07-13 16:28:05.104110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.380 16:28:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:34.380 16:28:05 -- common/autotest_common.sh@852 -- # return 0 00:10:34.380 16:28:05 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117259 00:10:34.380 16:28:05 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 117259 /var/tmp/spdk2.sock 00:10:34.380 16:28:05 -- common/autotest_common.sh@640 -- # local es=0 00:10:34.380 16:28:05 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 117259 /var/tmp/spdk2.sock 00:10:34.380 16:28:05 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:34.380 16:28:05 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:34.380 16:28:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:34.380 16:28:05 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:34.380 16:28:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:34.380 16:28:05 -- common/autotest_common.sh@643 -- # waitforlisten 117259 /var/tmp/spdk2.sock 00:10:34.380 16:28:05 -- common/autotest_common.sh@819 -- # '[' -z 117259 ']' 00:10:34.380 16:28:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:34.380 16:28:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:34.380 16:28:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:34.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:34.380 16:28:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:34.380 16:28:05 -- common/autotest_common.sh@10 -- # set +x 00:10:34.380 [2024-07-13 16:28:05.805016] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:34.380 [2024-07-13 16:28:05.805226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117259 ] 00:10:34.637 [2024-07-13 16:28:05.985227] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117236 has claimed it. 00:10:34.637 [2024-07-13 16:28:05.985573] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:35.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (117259) - No such process 00:10:35.202 ERROR: process (pid: 117259) is no longer running 00:10:35.202 16:28:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:35.202 16:28:06 -- common/autotest_common.sh@852 -- # return 1 00:10:35.202 16:28:06 -- common/autotest_common.sh@643 -- # es=1 00:10:35.202 16:28:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:35.202 16:28:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:35.202 16:28:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:35.202 16:28:06 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:35.202 16:28:06 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:35.202 16:28:06 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:35.202 16:28:06 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:35.202 16:28:06 -- event/cpu_locks.sh@141 -- # killprocess 117236 00:10:35.202 16:28:06 -- common/autotest_common.sh@926 -- # '[' -z 117236 ']' 00:10:35.202 16:28:06 -- common/autotest_common.sh@930 -- # kill -0 117236 00:10:35.202 16:28:06 -- common/autotest_common.sh@931 -- # uname 00:10:35.202 16:28:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:35.202 16:28:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117236 00:10:35.202 16:28:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:35.202 16:28:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:35.202 16:28:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117236' 00:10:35.202 killing process with pid 117236 00:10:35.202 16:28:06 -- common/autotest_common.sh@945 -- # kill 117236 00:10:35.202 16:28:06 -- common/autotest_common.sh@950 -- # wait 117236 00:10:36.144 00:10:36.144 real 0m2.481s 00:10:36.144 user 0m6.352s 00:10:36.144 sys 0m0.723s 00:10:36.144 16:28:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.144 ************************************ 00:10:36.144 END TEST locking_overlapped_coremask 00:10:36.144 ************************************ 00:10:36.144 16:28:07 -- common/autotest_common.sh@10 -- # set +x 00:10:36.144 16:28:07 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 
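check_remaining_locks, expanded in the trace above, is the assertion that a target started on mask 0x7 left behind exactly the lock files for cores 0-2. A sketch of that check under the same assumptions (glob and brace expansion both produce sorted lists, so a plain string compare suffices; error reporting in the real helper may differ):

check_remaining_locks() {
    # Lock files actually present after the test
    locks=(/var/tmp/spdk_cpu_lock_*)
    # Lock files a -m 0x7 (cores 0,1,2) target is expected to hold
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    # Both lists must match element for element
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
}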
00:10:36.144 16:28:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:36.144 16:28:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.144 16:28:07 -- common/autotest_common.sh@10 -- # set +x 00:10:36.144 ************************************ 00:10:36.144 START TEST locking_overlapped_coremask_via_rpc 00:10:36.144 ************************************ 00:10:36.144 16:28:07 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:10:36.144 16:28:07 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=117316 00:10:36.144 16:28:07 -- event/cpu_locks.sh@149 -- # waitforlisten 117316 /var/tmp/spdk.sock 00:10:36.144 16:28:07 -- common/autotest_common.sh@819 -- # '[' -z 117316 ']' 00:10:36.144 16:28:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.144 16:28:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:36.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.144 16:28:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.144 16:28:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:36.144 16:28:07 -- common/autotest_common.sh@10 -- # set +x 00:10:36.144 16:28:07 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:36.144 [2024-07-13 16:28:07.387387] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:36.144 [2024-07-13 16:28:07.387810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117316 ] 00:10:36.144 [2024-07-13 16:28:07.541351] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:36.144 [2024-07-13 16:28:07.541435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:36.401 [2024-07-13 16:28:07.617166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:36.401 [2024-07-13 16:28:07.617665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.401 [2024-07-13 16:28:07.617755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.401 [2024-07-13 16:28:07.617759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.965 16:28:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:36.965 16:28:08 -- common/autotest_common.sh@852 -- # return 0 00:10:36.965 16:28:08 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=117339 00:10:36.965 16:28:08 -- event/cpu_locks.sh@153 -- # waitforlisten 117339 /var/tmp/spdk2.sock 00:10:36.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:36.965 16:28:08 -- common/autotest_common.sh@819 -- # '[' -z 117339 ']' 00:10:36.965 16:28:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:36.965 16:28:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:36.965 16:28:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:36.965 16:28:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:36.965 16:28:08 -- common/autotest_common.sh@10 -- # set +x 00:10:36.965 16:28:08 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:36.965 [2024-07-13 16:28:08.425589] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:36.965 [2024-07-13 16:28:08.425924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117339 ] 00:10:37.223 [2024-07-13 16:28:08.597955] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:37.223 [2024-07-13 16:28:08.616355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.480 [2024-07-13 16:28:08.753915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:37.480 [2024-07-13 16:28:08.768665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.480 [2024-07-13 16:28:08.768766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.480 [2024-07-13 16:28:08.768774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:38.853 16:28:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:38.853 16:28:09 -- common/autotest_common.sh@852 -- # return 0 00:10:38.853 16:28:09 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:38.853 16:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.853 16:28:09 -- common/autotest_common.sh@10 -- # set +x 00:10:38.853 16:28:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.853 16:28:09 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:38.853 16:28:09 -- common/autotest_common.sh@640 -- # local es=0 00:10:38.853 16:28:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:38.853 16:28:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:10:38.853 16:28:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:38.853 16:28:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:10:38.853 16:28:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:38.853 16:28:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:38.853 16:28:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.853 16:28:09 -- common/autotest_common.sh@10 -- # set +x 00:10:38.853 [2024-07-13 16:28:09.981832] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117316 has claimed it. 
00:10:38.853 request: 00:10:38.853 { 00:10:38.853 "method": "framework_enable_cpumask_locks", 00:10:38.853 "req_id": 1 00:10:38.853 } 00:10:38.853 Got JSON-RPC error response 00:10:38.853 response: 00:10:38.853 { 00:10:38.853 "code": -32603, 00:10:38.853 "message": "Failed to claim CPU core: 2" 00:10:38.853 } 00:10:38.853 16:28:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:10:38.853 16:28:09 -- common/autotest_common.sh@643 -- # es=1 00:10:38.854 16:28:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:38.854 16:28:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:38.854 16:28:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:38.854 16:28:09 -- event/cpu_locks.sh@158 -- # waitforlisten 117316 /var/tmp/spdk.sock 00:10:38.854 16:28:09 -- common/autotest_common.sh@819 -- # '[' -z 117316 ']' 00:10:38.854 16:28:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.854 16:28:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:38.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.854 16:28:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.854 16:28:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:38.854 16:28:09 -- common/autotest_common.sh@10 -- # set +x 00:10:38.854 16:28:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:38.854 16:28:10 -- common/autotest_common.sh@852 -- # return 0 00:10:38.854 16:28:10 -- event/cpu_locks.sh@159 -- # waitforlisten 117339 /var/tmp/spdk2.sock 00:10:38.854 16:28:10 -- common/autotest_common.sh@819 -- # '[' -z 117339 ']' 00:10:38.854 16:28:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:38.854 16:28:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:38.854 16:28:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:38.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
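The request/response pair above is the negative path of the via_rpc variant: both targets boot with --disable-cpumask-locks, locking is then switched on at runtime, and the second enable attempt fails with -32603 because core 2 is already claimed. Outside the harness the same exchange could be reproduced with SPDK's rpc.py against the two sockets used in this run (a sketch, assuming a default SPDK checkout layout):

# First target, default socket /var/tmp/spdk.sock: succeeds and claims cores 0-2
scripts/rpc.py framework_enable_cpumask_locks
# Second target on the overlapping mask 0x1c: rejected with 'Failed to claim CPU core: 2'
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks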
00:10:38.854 16:28:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:38.854 16:28:10 -- common/autotest_common.sh@10 -- # set +x 00:10:39.113 16:28:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:39.113 16:28:10 -- common/autotest_common.sh@852 -- # return 0 00:10:39.113 16:28:10 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:39.113 16:28:10 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:39.113 16:28:10 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:39.113 16:28:10 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:39.113 00:10:39.113 real 0m3.099s 00:10:39.113 user 0m1.281s 00:10:39.113 sys 0m0.264s 00:10:39.113 16:28:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.113 16:28:10 -- common/autotest_common.sh@10 -- # set +x 00:10:39.113 ************************************ 00:10:39.113 END TEST locking_overlapped_coremask_via_rpc 00:10:39.113 ************************************ 00:10:39.113 16:28:10 -- event/cpu_locks.sh@174 -- # cleanup 00:10:39.113 16:28:10 -- event/cpu_locks.sh@15 -- # [[ -z 117316 ]] 00:10:39.113 16:28:10 -- event/cpu_locks.sh@15 -- # killprocess 117316 00:10:39.113 16:28:10 -- common/autotest_common.sh@926 -- # '[' -z 117316 ']' 00:10:39.113 16:28:10 -- common/autotest_common.sh@930 -- # kill -0 117316 00:10:39.113 16:28:10 -- common/autotest_common.sh@931 -- # uname 00:10:39.113 16:28:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:39.113 16:28:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117316 00:10:39.113 16:28:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:39.113 16:28:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:39.113 16:28:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117316' 00:10:39.113 killing process with pid 117316 00:10:39.113 16:28:10 -- common/autotest_common.sh@945 -- # kill 117316 00:10:39.113 16:28:10 -- common/autotest_common.sh@950 -- # wait 117316 00:10:40.113 16:28:11 -- event/cpu_locks.sh@16 -- # [[ -z 117339 ]] 00:10:40.113 16:28:11 -- event/cpu_locks.sh@16 -- # killprocess 117339 00:10:40.113 16:28:11 -- common/autotest_common.sh@926 -- # '[' -z 117339 ']' 00:10:40.113 16:28:11 -- common/autotest_common.sh@930 -- # kill -0 117339 00:10:40.113 16:28:11 -- common/autotest_common.sh@931 -- # uname 00:10:40.113 16:28:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:40.113 16:28:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117339 00:10:40.113 16:28:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:40.113 16:28:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:40.113 16:28:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117339' 00:10:40.113 killing process with pid 117339 00:10:40.113 16:28:11 -- common/autotest_common.sh@945 -- # kill 117339 00:10:40.113 16:28:11 -- common/autotest_common.sh@950 -- # wait 117339 00:10:40.700 16:28:11 -- event/cpu_locks.sh@18 -- # rm -f 00:10:40.700 16:28:11 -- event/cpu_locks.sh@1 -- # cleanup 00:10:40.700 16:28:11 -- event/cpu_locks.sh@15 -- # [[ -z 117316 ]] 00:10:40.700 16:28:11 -- event/cpu_locks.sh@15 -- # killprocess 117316 00:10:40.700 
16:28:11 -- common/autotest_common.sh@926 -- # '[' -z 117316 ']' 00:10:40.700 16:28:11 -- common/autotest_common.sh@930 -- # kill -0 117316 00:10:40.700 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (117316) - No such process 00:10:40.700 Process with pid 117316 is not found 00:10:40.700 16:28:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 117316 is not found' 00:10:40.700 16:28:11 -- event/cpu_locks.sh@16 -- # [[ -z 117339 ]] 00:10:40.700 16:28:11 -- event/cpu_locks.sh@16 -- # killprocess 117339 00:10:40.700 16:28:11 -- common/autotest_common.sh@926 -- # '[' -z 117339 ']' 00:10:40.700 16:28:11 -- common/autotest_common.sh@930 -- # kill -0 117339 00:10:40.700 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (117339) - No such process 00:10:40.700 Process with pid 117339 is not found 00:10:40.700 16:28:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 117339 is not found' 00:10:40.700 16:28:11 -- event/cpu_locks.sh@18 -- # rm -f 00:10:40.700 00:10:40.700 real 0m25.196s 00:10:40.700 user 0m42.137s 00:10:40.700 sys 0m8.108s 00:10:40.700 16:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.700 16:28:11 -- common/autotest_common.sh@10 -- # set +x 00:10:40.700 ************************************ 00:10:40.700 END TEST cpu_locks 00:10:40.700 ************************************ 00:10:40.700 00:10:40.700 real 0m53.563s 00:10:40.700 user 1m38.091s 00:10:40.700 sys 0m13.227s 00:10:40.700 16:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.700 16:28:11 -- common/autotest_common.sh@10 -- # set +x 00:10:40.700 ************************************ 00:10:40.700 END TEST event 00:10:40.700 ************************************ 00:10:40.700 16:28:12 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:40.700 16:28:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:40.700 16:28:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:40.700 16:28:12 -- common/autotest_common.sh@10 -- # set +x 00:10:40.700 ************************************ 00:10:40.700 START TEST thread 00:10:40.700 ************************************ 00:10:40.700 16:28:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:40.700 * Looking for test storage... 00:10:40.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:40.700 16:28:12 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:40.700 16:28:12 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:40.700 16:28:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:40.700 16:28:12 -- common/autotest_common.sh@10 -- # set +x 00:10:40.958 ************************************ 00:10:40.958 START TEST thread_poller_perf 00:10:40.958 ************************************ 00:10:40.958 16:28:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:40.958 [2024-07-13 16:28:12.206218] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:40.958 [2024-07-13 16:28:12.206532] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117481 ] 00:10:40.958 [2024-07-13 16:28:12.367755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.215 [2024-07-13 16:28:12.445520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.215 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:42.589 ====================================== 00:10:42.589 busy:2114429446 (cyc) 00:10:42.589 total_run_count: 352000 00:10:42.589 tsc_hz: 2100000000 (cyc) 00:10:42.589 ====================================== 00:10:42.589 poller_cost: 6006 (cyc), 2860 (nsec) 00:10:42.589 00:10:42.589 real 0m1.477s 00:10:42.589 user 0m1.249s 00:10:42.589 sys 0m0.127s 00:10:42.589 16:28:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.589 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:10:42.589 ************************************ 00:10:42.589 END TEST thread_poller_perf 00:10:42.589 ************************************ 00:10:42.589 16:28:13 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:42.589 16:28:13 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:42.589 16:28:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:42.589 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:10:42.589 ************************************ 00:10:42.589 START TEST thread_poller_perf 00:10:42.589 ************************************ 00:10:42.589 16:28:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:42.589 [2024-07-13 16:28:13.736523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:42.589 [2024-07-13 16:28:13.736819] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117531 ] 00:10:42.589 [2024-07-13 16:28:13.891121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.589 [2024-07-13 16:28:13.960040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.589 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:43.970 ====================================== 00:10:43.970 busy:2104953784 (cyc) 00:10:43.970 total_run_count: 4988000 00:10:43.970 tsc_hz: 2100000000 (cyc) 00:10:43.970 ====================================== 00:10:43.970 poller_cost: 422 (cyc), 200 (nsec) 00:10:43.970 00:10:43.970 real 0m1.446s 00:10:43.970 user 0m1.232s 00:10:43.970 sys 0m0.113s 00:10:43.970 16:28:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.970 16:28:15 -- common/autotest_common.sh@10 -- # set +x 00:10:43.970 ************************************ 00:10:43.970 END TEST thread_poller_perf 00:10:43.970 ************************************ 00:10:43.970 16:28:15 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:43.970 16:28:15 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:43.970 16:28:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:43.970 16:28:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:43.970 16:28:15 -- common/autotest_common.sh@10 -- # set +x 00:10:43.970 ************************************ 00:10:43.970 START TEST thread_spdk_lock 00:10:43.970 ************************************ 00:10:43.970 16:28:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:43.970 [2024-07-13 16:28:15.248375] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:43.970 [2024-07-13 16:28:15.248659] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117567 ] 00:10:43.970 [2024-07-13 16:28:15.406663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:44.227 [2024-07-13 16:28:15.480374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.227 [2024-07-13 16:28:15.480374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.792 [2024-07-13 16:28:15.994508] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:44.792 [2024-07-13 16:28:15.994659] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:44.792 [2024-07-13 16:28:15.994707] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55c58237e980 00:10:44.792 [2024-07-13 16:28:15.996215] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:44.792 [2024-07-13 16:28:15.996319] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:44.792 [2024-07-13 16:28:15.996375] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:44.792 Starting test contend 00:10:44.792 Worker Delay Wait us Hold us Total us 00:10:44.792 0 3 142129 192148 334277 00:10:44.792 1 5 62041 294743 356784 00:10:44.792 PASS test contend 00:10:44.792 Starting test hold_by_poller 
00:10:44.792 PASS test hold_by_poller 00:10:44.792 Starting test hold_by_message 00:10:44.792 PASS test hold_by_message 00:10:44.792 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:44.792 100014 assertions passed 00:10:44.792 0 assertions failed 00:10:44.792 00:10:44.792 real 0m0.961s 00:10:44.792 user 0m1.262s 00:10:44.792 sys 0m0.116s 00:10:44.792 16:28:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.792 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:10:44.792 ************************************ 00:10:44.792 END TEST thread_spdk_lock 00:10:44.792 ************************************ 00:10:44.792 00:10:44.792 real 0m4.183s 00:10:44.792 user 0m3.896s 00:10:44.792 sys 0m0.509s 00:10:44.792 16:28:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.792 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:10:44.793 ************************************ 00:10:44.793 END TEST thread 00:10:44.793 ************************************ 00:10:45.052 16:28:16 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:45.052 16:28:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:45.052 16:28:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.052 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:10:45.052 ************************************ 00:10:45.052 START TEST accel 00:10:45.052 ************************************ 00:10:45.052 16:28:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:45.052 * Looking for test storage... 00:10:45.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:45.052 16:28:16 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:45.052 16:28:16 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:45.052 16:28:16 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:45.052 16:28:16 -- accel/accel.sh@59 -- # spdk_tgt_pid=117654 00:10:45.052 16:28:16 -- accel/accel.sh@60 -- # waitforlisten 117654 00:10:45.052 16:28:16 -- common/autotest_common.sh@819 -- # '[' -z 117654 ']' 00:10:45.052 16:28:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.052 16:28:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:45.052 16:28:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.052 16:28:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:45.052 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:10:45.052 16:28:16 -- accel/accel.sh@58 -- # build_accel_config 00:10:45.052 16:28:16 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:45.052 16:28:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.052 16:28:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.052 16:28:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.052 16:28:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.052 16:28:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.052 16:28:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.052 16:28:16 -- accel/accel.sh@42 -- # jq -r . 00:10:45.052 [2024-07-13 16:28:16.471841] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:45.052 [2024-07-13 16:28:16.472124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117654 ] 00:10:45.310 [2024-07-13 16:28:16.626755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.310 [2024-07-13 16:28:16.700440] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:45.310 [2024-07-13 16:28:16.700691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.242 16:28:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:46.242 16:28:17 -- common/autotest_common.sh@852 -- # return 0 00:10:46.242 16:28:17 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:46.242 16:28:17 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:10:46.242 16:28:17 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:46.242 16:28:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.242 16:28:17 -- common/autotest_common.sh@10 -- # set +x 00:10:46.242 16:28:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # IFS== 00:10:46.242 16:28:17 -- accel/accel.sh@64 -- # read -r opc module 00:10:46.242 16:28:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:46.242 16:28:17 -- accel/accel.sh@67 -- # killprocess 117654 00:10:46.242 16:28:17 -- common/autotest_common.sh@926 -- # '[' -z 117654 ']' 00:10:46.242 16:28:17 -- common/autotest_common.sh@930 -- # kill -0 117654 00:10:46.242 16:28:17 -- common/autotest_common.sh@931 -- # uname 00:10:46.242 16:28:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:46.242 16:28:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117654 00:10:46.242 16:28:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:46.242 16:28:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:46.242 killing process with pid 117654 00:10:46.242 16:28:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117654' 00:10:46.242 16:28:17 -- common/autotest_common.sh@945 -- # kill 117654 00:10:46.242 16:28:17 -- common/autotest_common.sh@950 -- # wait 117654 00:10:46.808 16:28:18 -- accel/accel.sh@68 -- # trap - ERR 00:10:46.808 16:28:18 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:46.808 16:28:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:46.808 16:28:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:46.808 16:28:18 -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 16:28:18 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:10:46.808 16:28:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:46.808 16:28:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.808 16:28:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.808 16:28:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.808 16:28:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.808 16:28:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.808 16:28:18 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:10:46.808 16:28:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.808 16:28:18 -- accel/accel.sh@42 -- # jq -r . 00:10:46.808 16:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.808 16:28:18 -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 16:28:18 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:46.808 16:28:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:46.808 16:28:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:46.808 16:28:18 -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 ************************************ 00:10:46.808 START TEST accel_missing_filename 00:10:46.808 ************************************ 00:10:46.808 16:28:18 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:10:46.808 16:28:18 -- common/autotest_common.sh@640 -- # local es=0 00:10:46.808 16:28:18 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:46.808 16:28:18 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:46.808 16:28:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:46.808 16:28:18 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:46.808 16:28:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:46.808 16:28:18 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:10:46.808 16:28:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.808 16:28:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:46.808 16:28:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.808 16:28:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.808 16:28:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.808 16:28:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.808 16:28:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.808 16:28:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.808 16:28:18 -- accel/accel.sh@42 -- # jq -r . 00:10:46.808 [2024-07-13 16:28:18.275110] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:46.808 [2024-07-13 16:28:18.275402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117717 ] 00:10:47.066 [2024-07-13 16:28:18.432494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.066 [2024-07-13 16:28:18.505157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.324 [2024-07-13 16:28:18.585919] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:47.324 [2024-07-13 16:28:18.708668] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:47.583 A filename is required. 
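accel_missing_filename drives accel_perf through the NOT wrapper: compress with no -l input file must fail, and NOT inverts that failure into a test pass. A sketch of the wrapper's status handling, reconstructed from the traced arithmetic below (es=234 -> 106 -> 1; the real NOT in autotest_common.sh also validates its argument with type -t, elided here):

NOT() {
    local es=0
    "$@" || es=$?
    # Exit statuses above 128 encode death by signal; strip the offset
    if (( es > 128 )); then
        es=$(( es - 128 ))
    fi
    # Collapse every distinct failure code to 1
    case "$es" in
        0) ;;
        *) es=1 ;;
    esac
    # Succeed exactly when the wrapped command failed
    (( !es == 0 ))
}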
00:10:47.583 16:28:18 -- common/autotest_common.sh@643 -- # es=234 00:10:47.583 16:28:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:47.583 16:28:18 -- common/autotest_common.sh@652 -- # es=106 00:10:47.583 16:28:18 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:47.583 16:28:18 -- common/autotest_common.sh@660 -- # es=1 00:10:47.583 16:28:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:47.583 00:10:47.583 real 0m0.665s 00:10:47.583 user 0m0.383s 00:10:47.583 sys 0m0.232s 00:10:47.583 16:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.583 16:28:18 -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 ************************************ 00:10:47.583 END TEST accel_missing_filename 00:10:47.583 ************************************ 00:10:47.583 16:28:18 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:47.583 16:28:18 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:47.583 16:28:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.583 16:28:18 -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 ************************************ 00:10:47.583 START TEST accel_compress_verify 00:10:47.583 ************************************ 00:10:47.583 16:28:18 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:47.583 16:28:18 -- common/autotest_common.sh@640 -- # local es=0 00:10:47.583 16:28:18 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:47.583 16:28:18 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:47.583 16:28:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.583 16:28:18 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:47.583 16:28:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:47.583 16:28:18 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:47.583 16:28:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:47.583 16:28:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.583 16:28:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.583 16:28:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.583 16:28:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.583 16:28:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.583 16:28:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.583 16:28:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.583 16:28:18 -- accel/accel.sh@42 -- # jq -r . 00:10:47.583 [2024-07-13 16:28:19.005487] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:10:47.583 [2024-07-13 16:28:19.006435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117749 ] 00:10:47.842 [2024-07-13 16:28:19.162137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.842 [2024-07-13 16:28:19.245108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.101 [2024-07-13 16:28:19.333146] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:48.101 [2024-07-13 16:28:19.458776] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:48.360 00:10:48.360 Compression does not support the verify option, aborting. 00:10:48.360 16:28:19 -- common/autotest_common.sh@643 -- # es=161 00:10:48.360 16:28:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:48.360 16:28:19 -- common/autotest_common.sh@652 -- # es=33 00:10:48.360 16:28:19 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:48.360 16:28:19 -- common/autotest_common.sh@660 -- # es=1 00:10:48.360 16:28:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:48.360 00:10:48.360 real 0m0.687s 00:10:48.360 user 0m0.402s 00:10:48.360 sys 0m0.230s 00:10:48.360 16:28:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.360 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:10:48.360 ************************************ 00:10:48.360 END TEST accel_compress_verify 00:10:48.360 ************************************ 00:10:48.360 16:28:19 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:48.360 16:28:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:48.360 16:28:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:48.360 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:10:48.360 ************************************ 00:10:48.360 START TEST accel_wrong_workload 00:10:48.360 ************************************ 00:10:48.360 16:28:19 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:48.360 16:28:19 -- common/autotest_common.sh@640 -- # local es=0 00:10:48.360 16:28:19 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:48.360 16:28:19 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:48.360 16:28:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:48.360 16:28:19 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:48.360 16:28:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:48.360 16:28:19 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:48.360 16:28:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:48.360 16:28:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.360 16:28:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.360 16:28:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.360 16:28:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.360 16:28:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.360 16:28:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.360 16:28:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.360 16:28:19 -- accel/accel.sh@42 -- # jq -r . 
00:10:48.360 Unsupported workload type: foobar 00:10:48.360 [2024-07-13 16:28:19.750679] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:48.360 accel_perf options: 00:10:48.360 [-h help message] 00:10:48.360 [-q queue depth per core] 00:10:48.360 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:48.360 [-T number of threads per core 00:10:48.360 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:48.360 [-t time in seconds] 00:10:48.360 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:48.360 [ dif_verify, , dif_generate, dif_generate_copy 00:10:48.360 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:48.360 [-l for compress/decompress workloads, name of uncompressed input file 00:10:48.360 [-S for crc32c workload, use this seed value (default 0) 00:10:48.360 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:48.360 [-f for fill workload, use this BYTE value (default 255) 00:10:48.360 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:48.360 [-y verify result if this switch is on] 00:10:48.360 [-a tasks to allocate per core (default: same value as -q)] 00:10:48.360 Can be used to spread operations across a wider range of memory. 00:10:48.360 16:28:19 -- common/autotest_common.sh@643 -- # es=1 00:10:48.360 16:28:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:48.360 16:28:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:48.360 16:28:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:48.360 00:10:48.360 real 0m0.068s 00:10:48.360 user 0m0.075s 00:10:48.360 sys 0m0.041s 00:10:48.360 16:28:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.360 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:10:48.360 ************************************ 00:10:48.360 END TEST accel_wrong_workload 00:10:48.360 ************************************ 00:10:48.618 16:28:19 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:48.618 16:28:19 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:48.618 16:28:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:48.618 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:10:48.618 ************************************ 00:10:48.618 START TEST accel_negative_buffers 00:10:48.618 ************************************ 00:10:48.618 16:28:19 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:48.618 16:28:19 -- common/autotest_common.sh@640 -- # local es=0 00:10:48.618 16:28:19 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:48.618 16:28:19 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:48.618 16:28:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:48.618 16:28:19 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:48.618 16:28:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:48.618 16:28:19 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:48.618 16:28:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:48.618 16:28:19 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:48.618 16:28:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.618 16:28:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.618 16:28:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.618 16:28:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.618 16:28:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.618 16:28:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.618 16:28:19 -- accel/accel.sh@42 -- # jq -r . 00:10:48.618 -x option must be non-negative. 00:10:48.618 [2024-07-13 16:28:19.888553] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:48.618 accel_perf options: 00:10:48.618 [-h help message] 00:10:48.618 [-q queue depth per core] 00:10:48.618 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:48.618 [-T number of threads per core 00:10:48.618 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:48.618 [-t time in seconds] 00:10:48.618 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:48.618 [ dif_verify, , dif_generate, dif_generate_copy 00:10:48.618 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:48.618 [-l for compress/decompress workloads, name of uncompressed input file 00:10:48.618 [-S for crc32c workload, use this seed value (default 0) 00:10:48.618 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:48.618 [-f for fill workload, use this BYTE value (default 255) 00:10:48.618 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:48.618 [-y verify result if this switch is on] 00:10:48.618 [-a tasks to allocate per core (default: same value as -q)] 00:10:48.618 Can be used to spread operations across a wider range of memory. 
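The usage text is printed here because -x -1 fails option validation, just as -w foobar did above; the harness only needs the resulting non-zero exit status. For contrast, an invocation the listed options do accept (binary path as shown throughout this log):

# Valid xor run per the usage above: -x takes the number of source
# buffers (minimum 2, non-negative), -y enables verification.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -x 2 -y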
00:10:48.618 16:28:19 -- common/autotest_common.sh@643 -- # es=1 00:10:48.618 16:28:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:48.619 16:28:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:48.619 16:28:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:48.619 00:10:48.619 real 0m0.072s 00:10:48.619 user 0m0.075s 00:10:48.619 sys 0m0.042s 00:10:48.619 16:28:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.619 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:10:48.619 ************************************ 00:10:48.619 END TEST accel_negative_buffers 00:10:48.619 ************************************ 00:10:48.619 16:28:19 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:48.619 16:28:19 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:48.619 16:28:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:48.619 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:10:48.619 ************************************ 00:10:48.619 START TEST accel_crc32c 00:10:48.619 ************************************ 00:10:48.619 16:28:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:48.619 16:28:19 -- accel/accel.sh@16 -- # local accel_opc 00:10:48.619 16:28:19 -- accel/accel.sh@17 -- # local accel_module 00:10:48.619 16:28:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:48.619 16:28:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:48.619 16:28:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.619 16:28:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.619 16:28:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.619 16:28:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.619 16:28:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.619 16:28:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.619 16:28:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.619 16:28:19 -- accel/accel.sh@42 -- # jq -r . 00:10:48.619 [2024-07-13 16:28:20.022947] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:48.619 [2024-07-13 16:28:20.023239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117830 ] 00:10:48.878 [2024-07-13 16:28:20.186610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.878 [2024-07-13 16:28:20.269344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.256 16:28:21 -- accel/accel.sh@18 -- # out=' 00:10:50.256 SPDK Configuration: 00:10:50.256 Core mask: 0x1 00:10:50.256 00:10:50.256 Accel Perf Configuration: 00:10:50.256 Workload Type: crc32c 00:10:50.256 CRC-32C seed: 32 00:10:50.256 Transfer size: 4096 bytes 00:10:50.256 Vector count 1 00:10:50.256 Module: software 00:10:50.256 Queue depth: 32 00:10:50.256 Allocate depth: 32 00:10:50.256 # threads/core: 1 00:10:50.256 Run time: 1 seconds 00:10:50.256 Verify: Yes 00:10:50.256 00:10:50.256 Running for 1 seconds... 
00:10:50.256 00:10:50.256 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:50.256 ------------------------------------------------------------------------------------ 00:10:50.256 0,0 499136/s 1949 MiB/s 0 0 00:10:50.256 ==================================================================================== 00:10:50.256 Total 499136/s 1949 MiB/s 0 0' 00:10:50.256 16:28:21 -- accel/accel.sh@20 -- # IFS=: 00:10:50.256 16:28:21 -- accel/accel.sh@20 -- # read -r var val 00:10:50.256 16:28:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:50.256 16:28:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.256 16:28:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.256 16:28:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:50.256 16:28:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.256 16:28:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.256 16:28:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.256 16:28:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.256 16:28:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.256 16:28:21 -- accel/accel.sh@42 -- # jq -r . 00:10:50.256 [2024-07-13 16:28:21.710555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:50.256 [2024-07-13 16:28:21.711628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117867 ] 00:10:50.516 [2024-07-13 16:28:21.866427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.516 [2024-07-13 16:28:21.959049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.804 16:28:22 -- accel/accel.sh@21 -- # val= 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.804 16:28:22 -- accel/accel.sh@21 -- # val= 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.804 16:28:22 -- accel/accel.sh@21 -- # val=0x1 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.804 16:28:22 -- accel/accel.sh@21 -- # val= 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.804 16:28:22 -- accel/accel.sh@21 -- # val= 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.804 16:28:22 -- accel/accel.sh@21 -- # val=crc32c 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.804 16:28:22 -- accel/accel.sh@21 -- # val=32 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.804 16:28:22 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.804 16:28:22 -- accel/accel.sh@21 -- # val= 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.804 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.804 16:28:22 -- accel/accel.sh@21 -- # val=software 00:10:50.804 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.804 16:28:22 -- accel/accel.sh@23 -- # accel_module=software 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.805 16:28:22 -- accel/accel.sh@21 -- # val=32 00:10:50.805 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.805 16:28:22 -- accel/accel.sh@21 -- # val=32 00:10:50.805 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.805 16:28:22 -- accel/accel.sh@21 -- # val=1 00:10:50.805 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.805 16:28:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:50.805 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.805 16:28:22 -- accel/accel.sh@21 -- # val=Yes 00:10:50.805 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.805 16:28:22 -- accel/accel.sh@21 -- # val= 00:10:50.805 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:50.805 16:28:22 -- accel/accel.sh@21 -- # val= 00:10:50.805 16:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # IFS=: 00:10:50.805 16:28:22 -- accel/accel.sh@20 -- # read -r var val 00:10:52.204 16:28:23 -- accel/accel.sh@21 -- # val= 00:10:52.204 16:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # IFS=: 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # read -r var val 00:10:52.204 16:28:23 -- accel/accel.sh@21 -- # val= 00:10:52.204 16:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # IFS=: 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # read -r var val 00:10:52.204 16:28:23 -- accel/accel.sh@21 -- # val= 00:10:52.204 16:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # IFS=: 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # read -r var val 00:10:52.204 16:28:23 -- accel/accel.sh@21 -- # val= 00:10:52.204 16:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # IFS=: 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # read -r var val 00:10:52.204 16:28:23 -- accel/accel.sh@21 -- # val= 00:10:52.204 16:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # IFS=: 00:10:52.204 16:28:23 
-- accel/accel.sh@20 -- # read -r var val 00:10:52.204 16:28:23 -- accel/accel.sh@21 -- # val= 00:10:52.204 16:28:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # IFS=: 00:10:52.204 16:28:23 -- accel/accel.sh@20 -- # read -r var val 00:10:52.204 16:28:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:52.204 16:28:23 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:52.204 16:28:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:52.204 00:10:52.204 real 0m3.386s 00:10:52.204 user 0m2.810s 00:10:52.204 sys 0m0.412s 00:10:52.204 16:28:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.204 16:28:23 -- common/autotest_common.sh@10 -- # set +x 00:10:52.204 ************************************ 00:10:52.204 END TEST accel_crc32c 00:10:52.204 ************************************ 00:10:52.204 16:28:23 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:52.204 16:28:23 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:52.204 16:28:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.204 16:28:23 -- common/autotest_common.sh@10 -- # set +x 00:10:52.204 ************************************ 00:10:52.204 START TEST accel_crc32c_C2 00:10:52.204 ************************************ 00:10:52.204 16:28:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:52.204 16:28:23 -- accel/accel.sh@16 -- # local accel_opc 00:10:52.204 16:28:23 -- accel/accel.sh@17 -- # local accel_module 00:10:52.205 16:28:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:52.205 16:28:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:52.205 16:28:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.205 16:28:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.205 16:28:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.205 16:28:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.205 16:28:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.205 16:28:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.205 16:28:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.205 16:28:23 -- accel/accel.sh@42 -- # jq -r . 00:10:52.205 [2024-07-13 16:28:23.466546] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:52.205 [2024-07-13 16:28:23.467699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117900 ] 00:10:52.205 [2024-07-13 16:28:23.626985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.463 [2024-07-13 16:28:23.699748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.838 16:28:25 -- accel/accel.sh@18 -- # out=' 00:10:53.838 SPDK Configuration: 00:10:53.838 Core mask: 0x1 00:10:53.838 00:10:53.838 Accel Perf Configuration: 00:10:53.838 Workload Type: crc32c 00:10:53.838 CRC-32C seed: 0 00:10:53.838 Transfer size: 4096 bytes 00:10:53.838 Vector count 2 00:10:53.838 Module: software 00:10:53.838 Queue depth: 32 00:10:53.838 Allocate depth: 32 00:10:53.838 # threads/core: 1 00:10:53.838 Run time: 1 seconds 00:10:53.838 Verify: Yes 00:10:53.838 00:10:53.838 Running for 1 seconds... 
00:10:53.838 00:10:53.838 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:53.838 ------------------------------------------------------------------------------------ 00:10:53.838 0,0 415712/s 3247 MiB/s 0 0 00:10:53.838 ==================================================================================== 00:10:53.838 Total 415712/s 1623 MiB/s 0 0' 00:10:53.838 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:53.838 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:53.838 16:28:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:53.838 16:28:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:53.838 16:28:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.838 16:28:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:53.838 16:28:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.838 16:28:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.838 16:28:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:53.838 16:28:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:53.838 16:28:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:53.838 16:28:25 -- accel/accel.sh@42 -- # jq -r . 00:10:53.838 [2024-07-13 16:28:25.126248] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:53.838 [2024-07-13 16:28:25.126567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117935 ] 00:10:53.838 [2024-07-13 16:28:25.271726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.096 [2024-07-13 16:28:25.358016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val= 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val= 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val=0x1 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val= 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val= 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val=crc32c 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val=0 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val= 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val=software 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@23 -- # accel_module=software 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val=32 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val=32 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val=1 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val=Yes 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val= 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.096 16:28:25 -- accel/accel.sh@21 -- # val= 00:10:54.096 16:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.096 16:28:25 -- accel/accel.sh@20 -- # read -r var val 00:10:55.471 16:28:26 -- accel/accel.sh@21 -- # val= 00:10:55.471 16:28:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.471 16:28:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.471 16:28:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.471 16:28:26 -- accel/accel.sh@21 -- # val= 00:10:55.471 16:28:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.472 16:28:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.472 16:28:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.472 16:28:26 -- accel/accel.sh@21 -- # val= 00:10:55.472 16:28:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.472 16:28:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.472 16:28:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.472 16:28:26 -- accel/accel.sh@21 -- # val= 00:10:55.472 16:28:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.472 16:28:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.472 16:28:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.472 16:28:26 -- accel/accel.sh@21 -- # val= 00:10:55.472 16:28:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.472 16:28:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.472 16:28:26 -- 
accel/accel.sh@20 -- # read -r var val 00:10:55.472 16:28:26 -- accel/accel.sh@21 -- # val= 00:10:55.472 16:28:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.472 16:28:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.472 16:28:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.472 16:28:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:55.472 16:28:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:55.472 16:28:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:55.472 00:10:55.472 real 0m3.332s 00:10:55.472 user 0m2.744s 00:10:55.472 sys 0m0.413s 00:10:55.472 16:28:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.472 ************************************ 00:10:55.472 END TEST accel_crc32c_C2 00:10:55.472 ************************************ 00:10:55.472 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:10:55.472 16:28:26 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:55.472 16:28:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:55.472 16:28:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:55.472 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:10:55.472 ************************************ 00:10:55.472 START TEST accel_copy 00:10:55.472 ************************************ 00:10:55.472 16:28:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:55.472 16:28:26 -- accel/accel.sh@16 -- # local accel_opc 00:10:55.472 16:28:26 -- accel/accel.sh@17 -- # local accel_module 00:10:55.472 16:28:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:55.472 16:28:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:55.472 16:28:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:55.472 16:28:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:55.472 16:28:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.472 16:28:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.472 16:28:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:55.472 16:28:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:55.472 16:28:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:55.472 16:28:26 -- accel/accel.sh@42 -- # jq -r . 00:10:55.472 [2024-07-13 16:28:26.856664] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:55.472 [2024-07-13 16:28:26.856888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117981 ] 00:10:55.730 [2024-07-13 16:28:27.003523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.730 [2024-07-13 16:28:27.087933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.105 16:28:28 -- accel/accel.sh@18 -- # out=' 00:10:57.105 SPDK Configuration: 00:10:57.105 Core mask: 0x1 00:10:57.105 00:10:57.105 Accel Perf Configuration: 00:10:57.105 Workload Type: copy 00:10:57.105 Transfer size: 4096 bytes 00:10:57.105 Vector count 1 00:10:57.105 Module: software 00:10:57.105 Queue depth: 32 00:10:57.105 Allocate depth: 32 00:10:57.105 # threads/core: 1 00:10:57.105 Run time: 1 seconds 00:10:57.105 Verify: Yes 00:10:57.105 00:10:57.105 Running for 1 seconds... 
00:10:57.105 00:10:57.105 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:57.105 ------------------------------------------------------------------------------------ 00:10:57.105 0,0 348544/s 1361 MiB/s 0 0 00:10:57.105 ==================================================================================== 00:10:57.105 Total 348544/s 1361 MiB/s 0 0' 00:10:57.105 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.105 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.105 16:28:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:57.105 16:28:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:57.105 16:28:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.105 16:28:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.105 16:28:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.105 16:28:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.105 16:28:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.105 16:28:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.105 16:28:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.105 16:28:28 -- accel/accel.sh@42 -- # jq -r . 00:10:57.105 [2024-07-13 16:28:28.532662] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:57.105 [2024-07-13 16:28:28.533996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118003 ] 00:10:57.363 [2024-07-13 16:28:28.699382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.363 [2024-07-13 16:28:28.797818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val= 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val= 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val=0x1 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val= 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val= 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val=copy 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- 
accel/accel.sh@21 -- # val= 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val=software 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@23 -- # accel_module=software 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val=32 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val=32 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val=1 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val=Yes 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val= 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:57.622 16:28:28 -- accel/accel.sh@21 -- # val= 00:10:57.622 16:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # IFS=: 00:10:57.622 16:28:28 -- accel/accel.sh@20 -- # read -r var val 00:10:58.998 16:28:30 -- accel/accel.sh@21 -- # val= 00:10:58.998 16:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # IFS=: 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # read -r var val 00:10:58.998 16:28:30 -- accel/accel.sh@21 -- # val= 00:10:58.998 16:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # IFS=: 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # read -r var val 00:10:58.998 16:28:30 -- accel/accel.sh@21 -- # val= 00:10:58.998 16:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # IFS=: 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # read -r var val 00:10:58.998 16:28:30 -- accel/accel.sh@21 -- # val= 00:10:58.998 16:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # IFS=: 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # read -r var val 00:10:58.998 16:28:30 -- accel/accel.sh@21 -- # val= 00:10:58.998 16:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # IFS=: 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # read -r var val 00:10:58.998 16:28:30 -- accel/accel.sh@21 -- # val= 00:10:58.998 16:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.998 16:28:30 -- accel/accel.sh@20 -- # IFS=: 00:10:58.998 16:28:30 -- 
accel/accel.sh@20 -- # read -r var val 00:10:58.998 16:28:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:58.998 16:28:30 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:58.998 16:28:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:58.998 00:10:58.998 real 0m3.385s 00:10:58.998 user 0m2.797s 00:10:58.998 sys 0m0.443s 00:10:58.998 16:28:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.998 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:10:58.998 ************************************ 00:10:58.998 END TEST accel_copy 00:10:58.998 ************************************ 00:10:58.998 16:28:30 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:58.998 16:28:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:58.998 16:28:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:58.998 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:10:58.998 ************************************ 00:10:58.998 START TEST accel_fill 00:10:58.998 ************************************ 00:10:58.998 16:28:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:58.998 16:28:30 -- accel/accel.sh@16 -- # local accel_opc 00:10:58.998 16:28:30 -- accel/accel.sh@17 -- # local accel_module 00:10:58.998 16:28:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:58.998 16:28:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:58.998 16:28:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.998 16:28:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.998 16:28:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.998 16:28:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.998 16:28:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.998 16:28:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.998 16:28:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.998 16:28:30 -- accel/accel.sh@42 -- # jq -r . 00:10:58.998 [2024-07-13 16:28:30.308407] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:58.998 [2024-07-13 16:28:30.308605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118048 ] 00:10:58.998 [2024-07-13 16:28:30.450811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.257 [2024-07-13 16:28:30.522234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.640 16:28:31 -- accel/accel.sh@18 -- # out=' 00:11:00.640 SPDK Configuration: 00:11:00.640 Core mask: 0x1 00:11:00.640 00:11:00.640 Accel Perf Configuration: 00:11:00.640 Workload Type: fill 00:11:00.640 Fill pattern: 0x80 00:11:00.640 Transfer size: 4096 bytes 00:11:00.640 Vector count 1 00:11:00.640 Module: software 00:11:00.640 Queue depth: 64 00:11:00.640 Allocate depth: 64 00:11:00.640 # threads/core: 1 00:11:00.640 Run time: 1 seconds 00:11:00.640 Verify: Yes 00:11:00.640 00:11:00.640 Running for 1 seconds... 
00:11:00.640 00:11:00.640 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:00.641 ------------------------------------------------------------------------------------ 00:11:00.641 0,0 548096/s 2141 MiB/s 0 0 00:11:00.641 ==================================================================================== 00:11:00.641 Total 548096/s 2141 MiB/s 0 0' 00:11:00.641 16:28:31 -- accel/accel.sh@20 -- # IFS=: 00:11:00.641 16:28:31 -- accel/accel.sh@20 -- # read -r var val 00:11:00.641 16:28:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:00.641 16:28:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:00.641 16:28:31 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.641 16:28:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.641 16:28:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.641 16:28:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.641 16:28:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.641 16:28:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.641 16:28:31 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.641 16:28:31 -- accel/accel.sh@42 -- # jq -r . 00:11:00.641 [2024-07-13 16:28:31.957317] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:00.641 [2024-07-13 16:28:31.957664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118080 ] 00:11:00.899 [2024-07-13 16:28:32.115705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.899 [2024-07-13 16:28:32.207151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val= 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val= 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val=0x1 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val= 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val= 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val=fill 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@24 -- # accel_opc=fill 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val=0x80 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 
00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val= 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val=software 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@23 -- # accel_module=software 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val=64 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.899 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.899 16:28:32 -- accel/accel.sh@21 -- # val=64 00:11:00.899 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.900 16:28:32 -- accel/accel.sh@21 -- # val=1 00:11:00.900 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.900 16:28:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:00.900 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.900 16:28:32 -- accel/accel.sh@21 -- # val=Yes 00:11:00.900 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.900 16:28:32 -- accel/accel.sh@21 -- # val= 00:11:00.900 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:00.900 16:28:32 -- accel/accel.sh@21 -- # val= 00:11:00.900 16:28:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # IFS=: 00:11:00.900 16:28:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.299 16:28:33 -- accel/accel.sh@21 -- # val= 00:11:02.299 16:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # IFS=: 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # read -r var val 00:11:02.299 16:28:33 -- accel/accel.sh@21 -- # val= 00:11:02.299 16:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # IFS=: 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # read -r var val 00:11:02.299 16:28:33 -- accel/accel.sh@21 -- # val= 00:11:02.299 16:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # IFS=: 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # read -r var val 00:11:02.299 16:28:33 -- accel/accel.sh@21 -- # val= 00:11:02.299 16:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # IFS=: 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # read -r var val 00:11:02.299 16:28:33 -- accel/accel.sh@21 -- # val= 00:11:02.299 16:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # IFS=: 
00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # read -r var val 00:11:02.299 16:28:33 -- accel/accel.sh@21 -- # val= 00:11:02.299 16:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # IFS=: 00:11:02.299 16:28:33 -- accel/accel.sh@20 -- # read -r var val 00:11:02.299 16:28:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:02.299 16:28:33 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:11:02.299 16:28:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:02.299 00:11:02.299 real 0m3.339s 00:11:02.299 user 0m2.767s 00:11:02.299 sys 0m0.396s 00:11:02.299 16:28:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.299 16:28:33 -- common/autotest_common.sh@10 -- # set +x 00:11:02.299 ************************************ 00:11:02.299 END TEST accel_fill 00:11:02.299 ************************************ 00:11:02.299 16:28:33 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:11:02.299 16:28:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:02.299 16:28:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:02.299 16:28:33 -- common/autotest_common.sh@10 -- # set +x 00:11:02.299 ************************************ 00:11:02.299 START TEST accel_copy_crc32c 00:11:02.299 ************************************ 00:11:02.299 16:28:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:11:02.299 16:28:33 -- accel/accel.sh@16 -- # local accel_opc 00:11:02.299 16:28:33 -- accel/accel.sh@17 -- # local accel_module 00:11:02.299 16:28:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:02.299 16:28:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:02.299 16:28:33 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.299 16:28:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.299 16:28:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.299 16:28:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.299 16:28:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.299 16:28:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.299 16:28:33 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.299 16:28:33 -- accel/accel.sh@42 -- # jq -r . 00:11:02.299 [2024-07-13 16:28:33.716067] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:02.299 [2024-07-13 16:28:33.717209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118118 ] 00:11:02.558 [2024-07-13 16:28:33.871944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.558 [2024-07-13 16:28:33.943514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.935 16:28:35 -- accel/accel.sh@18 -- # out=' 00:11:03.935 SPDK Configuration: 00:11:03.935 Core mask: 0x1 00:11:03.935 00:11:03.935 Accel Perf Configuration: 00:11:03.935 Workload Type: copy_crc32c 00:11:03.935 CRC-32C seed: 0 00:11:03.935 Vector size: 4096 bytes 00:11:03.935 Transfer size: 4096 bytes 00:11:03.935 Vector count 1 00:11:03.935 Module: software 00:11:03.935 Queue depth: 32 00:11:03.935 Allocate depth: 32 00:11:03.935 # threads/core: 1 00:11:03.935 Run time: 1 seconds 00:11:03.935 Verify: Yes 00:11:03.935 00:11:03.935 Running for 1 seconds... 
00:11:03.935 00:11:03.935 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:03.935 ------------------------------------------------------------------------------------ 00:11:03.935 0,0 269920/s 1054 MiB/s 0 0 00:11:03.935 ==================================================================================== 00:11:03.935 Total 269920/s 1054 MiB/s 0 0' 00:11:03.935 16:28:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:03.935 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:03.935 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:03.935 16:28:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:03.935 16:28:35 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.935 16:28:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.935 16:28:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.935 16:28:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.935 16:28:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.935 16:28:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.935 16:28:35 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.935 16:28:35 -- accel/accel.sh@42 -- # jq -r . 00:11:03.935 [2024-07-13 16:28:35.378304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:03.935 [2024-07-13 16:28:35.378594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118148 ] 00:11:04.194 [2024-07-13 16:28:35.535014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.194 [2024-07-13 16:28:35.628252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.453 16:28:35 -- accel/accel.sh@21 -- # val= 00:11:04.453 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.453 16:28:35 -- accel/accel.sh@21 -- # val= 00:11:04.453 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.453 16:28:35 -- accel/accel.sh@21 -- # val=0x1 00:11:04.453 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.453 16:28:35 -- accel/accel.sh@21 -- # val= 00:11:04.453 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.453 16:28:35 -- accel/accel.sh@21 -- # val= 00:11:04.453 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.453 16:28:35 -- accel/accel.sh@21 -- # val=copy_crc32c 00:11:04.453 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.453 16:28:35 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.453 16:28:35 -- accel/accel.sh@21 -- # val=0 00:11:04.453 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.453 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 
16:28:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val= 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val=software 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@23 -- # accel_module=software 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val=32 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val=32 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val=1 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val=Yes 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val= 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:04.454 16:28:35 -- accel/accel.sh@21 -- # val= 00:11:04.454 16:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # IFS=: 00:11:04.454 16:28:35 -- accel/accel.sh@20 -- # read -r var val 00:11:05.835 16:28:37 -- accel/accel.sh@21 -- # val= 00:11:05.835 16:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # IFS=: 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # read -r var val 00:11:05.835 16:28:37 -- accel/accel.sh@21 -- # val= 00:11:05.835 16:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # IFS=: 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # read -r var val 00:11:05.835 16:28:37 -- accel/accel.sh@21 -- # val= 00:11:05.835 16:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # IFS=: 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # read -r var val 00:11:05.835 16:28:37 -- accel/accel.sh@21 -- # val= 00:11:05.835 16:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # IFS=: 
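The long runs of val= / read -r var val lines that bracket each second invocation come from accel.sh replaying the first run's "SPDK Configuration:" dump: each "Key: value" line is split on IFS=: and dispatched through case "$var" in, which is how assignments like accel_opc=crc32c and accel_module=software appear earlier in the trace. An assumed shape for that loop (key names, trimming, and the source variable are reconstructions, not taken verbatim from the log):

# Parse the configuration dump as colon-separated "Key: value" pairs.
while IFS=: read -r var val; do
    val=${val# }   # strip the space after the colon; shows up as val=crc32c etc.
    case "$var" in
        'Workload Type') accel_opc=$val ;;    # matches "accel_opc=crc32c" in the trace
        'Module')        accel_module=$val ;; # matches "accel_module=software"
        *) ;;                                 # remaining keys not handled here
    esac
done <<< "$config_dump"   # hypothetical variable holding the first run's output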
00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # read -r var val 00:11:05.835 16:28:37 -- accel/accel.sh@21 -- # val= 00:11:05.835 16:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # IFS=: 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # read -r var val 00:11:05.835 16:28:37 -- accel/accel.sh@21 -- # val= 00:11:05.835 16:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # IFS=: 00:11:05.835 16:28:37 -- accel/accel.sh@20 -- # read -r var val 00:11:05.835 16:28:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:05.835 16:28:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:05.835 16:28:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:05.835 00:11:05.835 real 0m3.357s 00:11:05.835 user 0m2.756s 00:11:05.835 sys 0m0.432s 00:11:05.835 16:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.836 16:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:05.836 ************************************ 00:11:05.836 END TEST accel_copy_crc32c 00:11:05.836 ************************************ 00:11:05.836 16:28:37 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:11:05.836 16:28:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:05.836 16:28:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:05.836 16:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:05.836 ************************************ 00:11:05.836 START TEST accel_copy_crc32c_C2 00:11:05.836 ************************************ 00:11:05.836 16:28:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:11:05.836 16:28:37 -- accel/accel.sh@16 -- # local accel_opc 00:11:05.836 16:28:37 -- accel/accel.sh@17 -- # local accel_module 00:11:05.836 16:28:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:05.836 16:28:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:05.836 16:28:37 -- accel/accel.sh@12 -- # build_accel_config 00:11:05.836 16:28:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.836 16:28:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.836 16:28:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.836 16:28:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.836 16:28:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.836 16:28:37 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.836 16:28:37 -- accel/accel.sh@42 -- # jq -r . 00:11:05.836 [2024-07-13 16:28:37.136557] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:05.836 [2024-07-13 16:28:37.137022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118193 ] 00:11:05.836 [2024-07-13 16:28:37.291950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.094 [2024-07-13 16:28:37.367565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.470 16:28:38 -- accel/accel.sh@18 -- # out=' 00:11:07.470 SPDK Configuration: 00:11:07.470 Core mask: 0x1 00:11:07.470 00:11:07.470 Accel Perf Configuration: 00:11:07.470 Workload Type: copy_crc32c 00:11:07.470 CRC-32C seed: 0 00:11:07.470 Vector size: 4096 bytes 00:11:07.470 Transfer size: 8192 bytes 00:11:07.470 Vector count 2 00:11:07.470 Module: software 00:11:07.470 Queue depth: 32 00:11:07.470 Allocate depth: 32 00:11:07.470 # threads/core: 1 00:11:07.470 Run time: 1 seconds 00:11:07.471 Verify: Yes 00:11:07.471 00:11:07.471 Running for 1 seconds... 00:11:07.471 00:11:07.471 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:07.471 ------------------------------------------------------------------------------------ 00:11:07.471 0,0 192256/s 1502 MiB/s 0 0 00:11:07.471 ==================================================================================== 00:11:07.471 Total 192256/s 751 MiB/s 0 0' 00:11:07.471 16:28:38 -- accel/accel.sh@20 -- # IFS=: 00:11:07.471 16:28:38 -- accel/accel.sh@20 -- # read -r var val 00:11:07.471 16:28:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:07.471 16:28:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:07.471 16:28:38 -- accel/accel.sh@12 -- # build_accel_config 00:11:07.471 16:28:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:07.471 16:28:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.471 16:28:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.471 16:28:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:07.471 16:28:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:07.471 16:28:38 -- accel/accel.sh@41 -- # local IFS=, 00:11:07.471 16:28:38 -- accel/accel.sh@42 -- # jq -r . 00:11:07.471 [2024-07-13 16:28:38.793761] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:07.471 [2024-07-13 16:28:38.794020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118216 ] 00:11:07.729 [2024-07-13 16:28:38.943917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.729 [2024-07-13 16:28:39.034531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val= 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val= 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val=0x1 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val= 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val= 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val=copy_crc32c 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val=0 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val='8192 bytes' 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val= 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val=software 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@23 -- # accel_module=software 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val=32 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val=32 
00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val=1 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val=Yes 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val= 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:07.729 16:28:39 -- accel/accel.sh@21 -- # val= 00:11:07.729 16:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # IFS=: 00:11:07.729 16:28:39 -- accel/accel.sh@20 -- # read -r var val 00:11:09.106 16:28:40 -- accel/accel.sh@21 -- # val= 00:11:09.106 16:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:09.106 16:28:40 -- accel/accel.sh@21 -- # val= 00:11:09.106 16:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:09.106 16:28:40 -- accel/accel.sh@21 -- # val= 00:11:09.106 16:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:09.106 16:28:40 -- accel/accel.sh@21 -- # val= 00:11:09.106 16:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:09.106 16:28:40 -- accel/accel.sh@21 -- # val= 00:11:09.106 16:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:09.106 16:28:40 -- accel/accel.sh@21 -- # val= 00:11:09.106 16:28:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # IFS=: 00:11:09.106 16:28:40 -- accel/accel.sh@20 -- # read -r var val 00:11:09.106 16:28:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:09.106 16:28:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:09.106 16:28:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:09.106 00:11:09.106 real 0m3.346s 00:11:09.106 user 0m2.742s 00:11:09.106 sys 0m0.442s 00:11:09.106 16:28:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.106 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:11:09.106 ************************************ 00:11:09.106 END TEST accel_copy_crc32c_C2 00:11:09.106 ************************************ 00:11:09.107 16:28:40 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:09.107 16:28:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
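The copy_crc32c cases above are driven through run_test, which hands accel_perf a JSON accel config over /dev/fd/62. A minimal standalone re-run of the chained variant is sketched below; it assumes the autotest VM's default-built SPDK tree and simply omits the fd-based config that the harness supplies:

    # software copy+CRC32C, 1-second run, verified, two 4096-byte source
    # vectors per operation (-C 2, reported as "Vector count 2" above)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2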
00:11:09.107 16:28:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:09.107 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:11:09.107 ************************************ 00:11:09.107 START TEST accel_dualcast 00:11:09.107 ************************************ 00:11:09.107 16:28:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:11:09.107 16:28:40 -- accel/accel.sh@16 -- # local accel_opc 00:11:09.107 16:28:40 -- accel/accel.sh@17 -- # local accel_module 00:11:09.107 16:28:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:11:09.107 16:28:40 -- accel/accel.sh@12 -- # build_accel_config 00:11:09.107 16:28:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:09.107 16:28:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:09.107 16:28:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:09.107 16:28:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:09.107 16:28:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:09.107 16:28:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:09.107 16:28:40 -- accel/accel.sh@41 -- # local IFS=, 00:11:09.107 16:28:40 -- accel/accel.sh@42 -- # jq -r . 00:11:09.107 [2024-07-13 16:28:40.545523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:09.107 [2024-07-13 16:28:40.545800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118261 ] 00:11:09.365 [2024-07-13 16:28:40.700353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.365 [2024-07-13 16:28:40.773098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.743 16:28:42 -- accel/accel.sh@18 -- # out=' 00:11:10.743 SPDK Configuration: 00:11:10.743 Core mask: 0x1 00:11:10.743 00:11:10.743 Accel Perf Configuration: 00:11:10.743 Workload Type: dualcast 00:11:10.743 Transfer size: 4096 bytes 00:11:10.743 Vector count 1 00:11:10.743 Module: software 00:11:10.743 Queue depth: 32 00:11:10.743 Allocate depth: 32 00:11:10.743 # threads/core: 1 00:11:10.743 Run time: 1 seconds 00:11:10.743 Verify: Yes 00:11:10.743 00:11:10.743 Running for 1 seconds... 00:11:10.743 00:11:10.743 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:10.743 ------------------------------------------------------------------------------------ 00:11:10.743 0,0 380384/s 1485 MiB/s 0 0 00:11:10.743 ==================================================================================== 00:11:10.743 Total 380384/s 1485 MiB/s 0 0' 00:11:10.743 16:28:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:10.743 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:10.743 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:10.743 16:28:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:10.743 16:28:42 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.743 16:28:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.743 16:28:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.743 16:28:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.743 16:28:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.743 16:28:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.743 16:28:42 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.743 16:28:42 -- accel/accel.sh@42 -- # jq -r . 
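The repeating IFS=: / read -r var val / case "$var" records in these traces are accel.sh scraping the key/value lines of accel_perf's captured $out. A simplified sketch of that parsing pattern (not the verbatim accel.sh loop) is:

    # split each "Key: value" line at the first ':' and latch the fields of interest
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=$val ;;  # e.g. " dualcast"
            *Module*) accel_module=$val ;;        # e.g. " software"
        esac
    done <<< "$out"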
00:11:10.743 [2024-07-13 16:28:42.207610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:10.743 [2024-07-13 16:28:42.207802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118297 ] 00:11:11.013 [2024-07-13 16:28:42.350139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.013 [2024-07-13 16:28:42.435619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val= 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val= 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val=0x1 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val= 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val= 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val=dualcast 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val= 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val=software 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@23 -- # accel_module=software 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val=32 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val=32 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val=1 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 
16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val=Yes 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val= 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:11.282 16:28:42 -- accel/accel.sh@21 -- # val= 00:11:11.282 16:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # IFS=: 00:11:11.282 16:28:42 -- accel/accel.sh@20 -- # read -r var val 00:11:12.663 16:28:43 -- accel/accel.sh@21 -- # val= 00:11:12.663 16:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:12.663 16:28:43 -- accel/accel.sh@21 -- # val= 00:11:12.663 16:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:12.663 16:28:43 -- accel/accel.sh@21 -- # val= 00:11:12.663 16:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:12.663 16:28:43 -- accel/accel.sh@21 -- # val= 00:11:12.663 16:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:12.663 16:28:43 -- accel/accel.sh@21 -- # val= 00:11:12.663 16:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:12.663 16:28:43 -- accel/accel.sh@21 -- # val= 00:11:12.663 16:28:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # IFS=: 00:11:12.663 16:28:43 -- accel/accel.sh@20 -- # read -r var val 00:11:12.663 16:28:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:12.663 16:28:43 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:11:12.663 16:28:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:12.663 00:11:12.663 real 0m3.336s 00:11:12.663 user 0m2.749s 00:11:12.663 sys 0m0.410s 00:11:12.663 16:28:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.663 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:11:12.663 ************************************ 00:11:12.663 END TEST accel_dualcast 00:11:12.663 ************************************ 00:11:12.663 16:28:43 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:12.663 16:28:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:12.663 16:28:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:12.663 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:11:12.663 ************************************ 00:11:12.663 START TEST accel_compare 00:11:12.663 ************************************ 00:11:12.663 16:28:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:11:12.663 
16:28:43 -- accel/accel.sh@16 -- # local accel_opc 00:11:12.663 16:28:43 -- accel/accel.sh@17 -- # local accel_module 00:11:12.663 16:28:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:11:12.663 16:28:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:12.663 16:28:43 -- accel/accel.sh@12 -- # build_accel_config 00:11:12.663 16:28:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:12.663 16:28:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:12.663 16:28:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:12.663 16:28:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:12.663 16:28:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:12.663 16:28:43 -- accel/accel.sh@41 -- # local IFS=, 00:11:12.663 16:28:43 -- accel/accel.sh@42 -- # jq -r . 00:11:12.663 [2024-07-13 16:28:43.938652] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:12.663 [2024-07-13 16:28:43.938914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118331 ] 00:11:12.663 [2024-07-13 16:28:44.086968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.922 [2024-07-13 16:28:44.168842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.295 16:28:45 -- accel/accel.sh@18 -- # out=' 00:11:14.295 SPDK Configuration: 00:11:14.295 Core mask: 0x1 00:11:14.295 00:11:14.295 Accel Perf Configuration: 00:11:14.295 Workload Type: compare 00:11:14.295 Transfer size: 4096 bytes 00:11:14.295 Vector count 1 00:11:14.295 Module: software 00:11:14.295 Queue depth: 32 00:11:14.295 Allocate depth: 32 00:11:14.295 # threads/core: 1 00:11:14.295 Run time: 1 seconds 00:11:14.295 Verify: Yes 00:11:14.295 00:11:14.295 Running for 1 seconds... 00:11:14.295 00:11:14.295 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:14.295 ------------------------------------------------------------------------------------ 00:11:14.295 0,0 522816/s 2042 MiB/s 0 0 00:11:14.295 ==================================================================================== 00:11:14.295 Total 522816/s 2042 MiB/s 0 0' 00:11:14.295 16:28:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:14.295 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.295 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.295 16:28:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:14.295 16:28:45 -- accel/accel.sh@12 -- # build_accel_config 00:11:14.295 16:28:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:14.295 16:28:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:14.295 16:28:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:14.295 16:28:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:14.295 16:28:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:14.295 16:28:45 -- accel/accel.sh@41 -- # local IFS=, 00:11:14.295 16:28:45 -- accel/accel.sh@42 -- # jq -r . 00:11:14.295 [2024-07-13 16:28:45.593826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:14.295 [2024-07-13 16:28:45.594098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118366 ] 00:11:14.295 [2024-07-13 16:28:45.740463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.553 [2024-07-13 16:28:45.823992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val= 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val= 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val=0x1 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val= 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val= 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val=compare 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@24 -- # accel_opc=compare 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val= 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val=software 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@23 -- # accel_module=software 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val=32 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val=32 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val=1 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val='1 seconds' 
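The compare throughput reported above is internally consistent: 522816 transfers/s at 4096 bytes each works out to the printed 2042 MiB/s. A one-line shell check:

    # 522816 transfers/s * 4096 bytes, expressed in MiB/s (2^20 bytes)
    echo $((522816 * 4096 / 1048576))    # prints 2042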
00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val=Yes 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val= 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:14.553 16:28:45 -- accel/accel.sh@21 -- # val= 00:11:14.553 16:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # IFS=: 00:11:14.553 16:28:45 -- accel/accel.sh@20 -- # read -r var val 00:11:15.929 16:28:47 -- accel/accel.sh@21 -- # val= 00:11:15.929 16:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # IFS=: 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # read -r var val 00:11:15.929 16:28:47 -- accel/accel.sh@21 -- # val= 00:11:15.929 16:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # IFS=: 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # read -r var val 00:11:15.929 16:28:47 -- accel/accel.sh@21 -- # val= 00:11:15.929 16:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # IFS=: 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # read -r var val 00:11:15.929 16:28:47 -- accel/accel.sh@21 -- # val= 00:11:15.929 16:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # IFS=: 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # read -r var val 00:11:15.929 16:28:47 -- accel/accel.sh@21 -- # val= 00:11:15.929 16:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # IFS=: 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # read -r var val 00:11:15.929 16:28:47 -- accel/accel.sh@21 -- # val= 00:11:15.929 16:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # IFS=: 00:11:15.929 16:28:47 -- accel/accel.sh@20 -- # read -r var val 00:11:15.929 16:28:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:15.929 16:28:47 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:11:15.929 16:28:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:15.929 00:11:15.929 real 0m3.325s 00:11:15.929 user 0m2.757s 00:11:15.929 sys 0m0.408s 00:11:15.929 16:28:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.929 ************************************ 00:11:15.929 END TEST accel_compare 00:11:15.929 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:11:15.929 ************************************ 00:11:15.929 16:28:47 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:15.929 16:28:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:15.929 16:28:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:15.929 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:11:15.929 ************************************ 00:11:15.929 START TEST accel_xor 00:11:15.929 ************************************ 00:11:15.929 16:28:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:11:15.929 16:28:47 -- accel/accel.sh@16 -- # local accel_opc 00:11:15.929 16:28:47 -- accel/accel.sh@17 -- # local accel_module 00:11:15.929 
16:28:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:11:15.929 16:28:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:15.929 16:28:47 -- accel/accel.sh@12 -- # build_accel_config 00:11:15.929 16:28:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.929 16:28:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.929 16:28:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.929 16:28:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.929 16:28:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.929 16:28:47 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.929 16:28:47 -- accel/accel.sh@42 -- # jq -r . 00:11:15.929 [2024-07-13 16:28:47.327128] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:15.929 [2024-07-13 16:28:47.327405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118406 ] 00:11:16.187 [2024-07-13 16:28:47.468120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.187 [2024-07-13 16:28:47.544578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.563 16:28:48 -- accel/accel.sh@18 -- # out=' 00:11:17.563 SPDK Configuration: 00:11:17.563 Core mask: 0x1 00:11:17.563 00:11:17.563 Accel Perf Configuration: 00:11:17.563 Workload Type: xor 00:11:17.563 Source buffers: 2 00:11:17.563 Transfer size: 4096 bytes 00:11:17.563 Vector count 1 00:11:17.563 Module: software 00:11:17.563 Queue depth: 32 00:11:17.563 Allocate depth: 32 00:11:17.563 # threads/core: 1 00:11:17.563 Run time: 1 seconds 00:11:17.563 Verify: Yes 00:11:17.563 00:11:17.563 Running for 1 seconds... 00:11:17.563 00:11:17.563 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:17.563 ------------------------------------------------------------------------------------ 00:11:17.563 0,0 392704/s 1534 MiB/s 0 0 00:11:17.563 ==================================================================================== 00:11:17.563 Total 392704/s 1534 MiB/s 0 0' 00:11:17.563 16:28:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:17.563 16:28:48 -- accel/accel.sh@20 -- # IFS=: 00:11:17.563 16:28:48 -- accel/accel.sh@20 -- # read -r var val 00:11:17.563 16:28:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:17.563 16:28:48 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.563 16:28:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.563 16:28:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.563 16:28:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.563 16:28:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.563 16:28:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.563 16:28:48 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.563 16:28:48 -- accel/accel.sh@42 -- # jq -r . 00:11:17.563 [2024-07-13 16:28:48.973892] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:17.563 [2024-07-13 16:28:48.974085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118434 ] 00:11:17.822 [2024-07-13 16:28:49.119159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.822 [2024-07-13 16:28:49.210828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val= 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val= 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val=0x1 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val= 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val= 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val=xor 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val=2 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val= 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val=software 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@23 -- # accel_module=software 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val=32 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val=32 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val=1 00:11:18.079 16:28:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val=Yes 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val= 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:18.079 16:28:49 -- accel/accel.sh@21 -- # val= 00:11:18.079 16:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # IFS=: 00:11:18.079 16:28:49 -- accel/accel.sh@20 -- # read -r var val 00:11:19.453 16:28:50 -- accel/accel.sh@21 -- # val= 00:11:19.453 16:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.453 16:28:50 -- accel/accel.sh@21 -- # val= 00:11:19.453 16:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.453 16:28:50 -- accel/accel.sh@21 -- # val= 00:11:19.453 16:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.453 16:28:50 -- accel/accel.sh@21 -- # val= 00:11:19.453 16:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.453 16:28:50 -- accel/accel.sh@21 -- # val= 00:11:19.453 16:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.453 16:28:50 -- accel/accel.sh@21 -- # val= 00:11:19.453 16:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.453 16:28:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.453 16:28:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:19.453 16:28:50 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:19.453 16:28:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:19.453 00:11:19.453 real 0m3.339s 00:11:19.453 user 0m2.786s 00:11:19.453 sys 0m0.385s 00:11:19.453 16:28:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.453 ************************************ 00:11:19.453 END TEST accel_xor 00:11:19.453 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:11:19.453 ************************************ 00:11:19.453 16:28:50 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:19.453 16:28:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:19.453 16:28:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.453 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:11:19.453 ************************************ 00:11:19.453 START TEST accel_xor 00:11:19.453 ************************************ 00:11:19.453 
16:28:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:11:19.453 16:28:50 -- accel/accel.sh@16 -- # local accel_opc 00:11:19.453 16:28:50 -- accel/accel.sh@17 -- # local accel_module 00:11:19.453 16:28:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:19.453 16:28:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:19.453 16:28:50 -- accel/accel.sh@12 -- # build_accel_config 00:11:19.453 16:28:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:19.453 16:28:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.453 16:28:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.453 16:28:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:19.453 16:28:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:19.453 16:28:50 -- accel/accel.sh@41 -- # local IFS=, 00:11:19.453 16:28:50 -- accel/accel.sh@42 -- # jq -r . 00:11:19.453 [2024-07-13 16:28:50.732419] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:19.453 [2024-07-13 16:28:50.732714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118474 ] 00:11:19.453 [2024-07-13 16:28:50.887764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.712 [2024-07-13 16:28:50.971169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.089 16:28:52 -- accel/accel.sh@18 -- # out=' 00:11:21.089 SPDK Configuration: 00:11:21.089 Core mask: 0x1 00:11:21.089 00:11:21.089 Accel Perf Configuration: 00:11:21.089 Workload Type: xor 00:11:21.089 Source buffers: 3 00:11:21.089 Transfer size: 4096 bytes 00:11:21.089 Vector count 1 00:11:21.089 Module: software 00:11:21.089 Queue depth: 32 00:11:21.089 Allocate depth: 32 00:11:21.089 # threads/core: 1 00:11:21.089 Run time: 1 seconds 00:11:21.089 Verify: Yes 00:11:21.089 00:11:21.089 Running for 1 seconds... 00:11:21.089 00:11:21.089 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:21.089 ------------------------------------------------------------------------------------ 00:11:21.089 0,0 371232/s 1450 MiB/s 0 0 00:11:21.089 ==================================================================================== 00:11:21.089 Total 371232/s 1450 MiB/s 0 0' 00:11:21.089 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.089 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.089 16:28:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:21.089 16:28:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:21.089 16:28:52 -- accel/accel.sh@12 -- # build_accel_config 00:11:21.089 16:28:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:21.089 16:28:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:21.089 16:28:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:21.089 16:28:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:21.089 16:28:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:21.089 16:28:52 -- accel/accel.sh@41 -- # local IFS=, 00:11:21.089 16:28:52 -- accel/accel.sh@42 -- # jq -r . 00:11:21.089 [2024-07-13 16:28:52.401589] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:21.089 [2024-07-13 16:28:52.401870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118509 ] 00:11:21.089 [2024-07-13 16:28:52.553164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.348 [2024-07-13 16:28:52.649382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val= 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val= 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val=0x1 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val= 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val= 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val=xor 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val=3 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val= 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val=software 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@23 -- # accel_module=software 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val=32 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val=32 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val=1 00:11:21.348 16:28:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val=Yes 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val= 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:21.348 16:28:52 -- accel/accel.sh@21 -- # val= 00:11:21.348 16:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # IFS=: 00:11:21.348 16:28:52 -- accel/accel.sh@20 -- # read -r var val 00:11:22.721 16:28:54 -- accel/accel.sh@21 -- # val= 00:11:22.721 16:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # IFS=: 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # read -r var val 00:11:22.721 16:28:54 -- accel/accel.sh@21 -- # val= 00:11:22.721 16:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # IFS=: 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # read -r var val 00:11:22.721 16:28:54 -- accel/accel.sh@21 -- # val= 00:11:22.721 16:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # IFS=: 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # read -r var val 00:11:22.721 16:28:54 -- accel/accel.sh@21 -- # val= 00:11:22.721 16:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # IFS=: 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # read -r var val 00:11:22.721 16:28:54 -- accel/accel.sh@21 -- # val= 00:11:22.721 16:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # IFS=: 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # read -r var val 00:11:22.721 16:28:54 -- accel/accel.sh@21 -- # val= 00:11:22.721 16:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # IFS=: 00:11:22.721 16:28:54 -- accel/accel.sh@20 -- # read -r var val 00:11:22.721 16:28:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:22.721 16:28:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:22.721 16:28:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:22.721 00:11:22.721 real 0m3.374s 00:11:22.721 user 0m2.748s 00:11:22.721 sys 0m0.419s 00:11:22.721 16:28:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.721 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:11:22.721 ************************************ 00:11:22.721 END TEST accel_xor 00:11:22.721 ************************************ 00:11:22.721 16:28:54 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:22.721 16:28:54 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:22.721 16:28:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.721 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:11:22.721 ************************************ 00:11:22.721 START TEST accel_dif_verify 00:11:22.721 ************************************ 
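The dif_verify workload starting here checks T10 DIF protection information rather than raw payloads; the configuration it prints below (4096-byte transfers, 512-byte blocks, 8-byte metadata) implies, assuming the standard DIF layout, 8 protected blocks per transfer:

    # 4096-byte transfer / 512-byte blocks = 8 blocks, each carrying 8 bytes of DIF metadata
    echo $((4096 / 512))        # prints 8
    echo $((4096 / 512 * 8))    # prints 64 (bytes of protection info per transfer)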
00:11:22.721 16:28:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:11:22.721 16:28:54 -- accel/accel.sh@16 -- # local accel_opc 00:11:22.721 16:28:54 -- accel/accel.sh@17 -- # local accel_module 00:11:22.721 16:28:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:22.721 16:28:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:22.721 16:28:54 -- accel/accel.sh@12 -- # build_accel_config 00:11:22.721 16:28:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:22.721 16:28:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.721 16:28:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.721 16:28:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:22.721 16:28:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:22.721 16:28:54 -- accel/accel.sh@41 -- # local IFS=, 00:11:22.721 16:28:54 -- accel/accel.sh@42 -- # jq -r . 00:11:22.721 [2024-07-13 16:28:54.174808] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:22.721 [2024-07-13 16:28:54.176045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118542 ] 00:11:23.033 [2024-07-13 16:28:54.344781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.033 [2024-07-13 16:28:54.426729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.407 16:28:55 -- accel/accel.sh@18 -- # out=' 00:11:24.407 SPDK Configuration: 00:11:24.407 Core mask: 0x1 00:11:24.407 00:11:24.407 Accel Perf Configuration: 00:11:24.407 Workload Type: dif_verify 00:11:24.407 Vector size: 4096 bytes 00:11:24.407 Transfer size: 4096 bytes 00:11:24.407 Block size: 512 bytes 00:11:24.407 Metadata size: 8 bytes 00:11:24.407 Vector count 1 00:11:24.407 Module: software 00:11:24.407 Queue depth: 32 00:11:24.407 Allocate depth: 32 00:11:24.407 # threads/core: 1 00:11:24.407 Run time: 1 seconds 00:11:24.407 Verify: No 00:11:24.407 00:11:24.407 Running for 1 seconds... 00:11:24.407 00:11:24.407 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:24.407 ------------------------------------------------------------------------------------ 00:11:24.407 0,0 115424/s 457 MiB/s 0 0 00:11:24.407 ==================================================================================== 00:11:24.407 Total 115424/s 450 MiB/s 0 0' 00:11:24.407 16:28:55 -- accel/accel.sh@20 -- # IFS=: 00:11:24.407 16:28:55 -- accel/accel.sh@20 -- # read -r var val 00:11:24.407 16:28:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:24.407 16:28:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:24.407 16:28:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:24.407 16:28:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:24.407 16:28:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:24.407 16:28:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:24.407 16:28:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:24.407 16:28:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:24.407 16:28:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:24.407 16:28:55 -- accel/accel.sh@42 -- # jq -r . 00:11:24.408 [2024-07-13 16:28:55.865535] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:24.408 [2024-07-13 16:28:55.866583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118577 ] 00:11:24.665 [2024-07-13 16:28:56.027177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.665 [2024-07-13 16:28:56.116971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val= 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val= 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val=0x1 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val= 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val= 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val=dif_verify 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val= 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val=software 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@23 -- # accel_module=software 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- 
accel/accel.sh@21 -- # val=32 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val=32 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val=1 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val=No 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val= 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:24.924 16:28:56 -- accel/accel.sh@21 -- # val= 00:11:24.924 16:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # IFS=: 00:11:24.924 16:28:56 -- accel/accel.sh@20 -- # read -r var val 00:11:26.301 16:28:57 -- accel/accel.sh@21 -- # val= 00:11:26.301 16:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # IFS=: 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # read -r var val 00:11:26.301 16:28:57 -- accel/accel.sh@21 -- # val= 00:11:26.301 16:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # IFS=: 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # read -r var val 00:11:26.301 16:28:57 -- accel/accel.sh@21 -- # val= 00:11:26.301 16:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # IFS=: 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # read -r var val 00:11:26.301 16:28:57 -- accel/accel.sh@21 -- # val= 00:11:26.301 16:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # IFS=: 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # read -r var val 00:11:26.301 16:28:57 -- accel/accel.sh@21 -- # val= 00:11:26.301 16:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # IFS=: 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # read -r var val 00:11:26.301 16:28:57 -- accel/accel.sh@21 -- # val= 00:11:26.301 16:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # IFS=: 00:11:26.301 16:28:57 -- accel/accel.sh@20 -- # read -r var val 00:11:26.301 ************************************ 00:11:26.301 END TEST accel_dif_verify 00:11:26.301 ************************************ 00:11:26.301 16:28:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:26.301 16:28:57 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:26.301 16:28:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:26.301 00:11:26.301 real 0m3.403s 00:11:26.301 user 0m2.800s 00:11:26.301 sys 0m0.416s 00:11:26.301 16:28:57 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:26.301 16:28:57 -- common/autotest_common.sh@10 -- # set +x 00:11:26.301 16:28:57 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:26.301 16:28:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:26.301 16:28:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:26.301 16:28:57 -- common/autotest_common.sh@10 -- # set +x 00:11:26.301 ************************************ 00:11:26.301 START TEST accel_dif_generate 00:11:26.301 ************************************ 00:11:26.301 16:28:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:26.301 16:28:57 -- accel/accel.sh@16 -- # local accel_opc 00:11:26.301 16:28:57 -- accel/accel.sh@17 -- # local accel_module 00:11:26.301 16:28:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:26.301 16:28:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:26.301 16:28:57 -- accel/accel.sh@12 -- # build_accel_config 00:11:26.301 16:28:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:26.301 16:28:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.301 16:28:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.301 16:28:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:26.301 16:28:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:26.301 16:28:57 -- accel/accel.sh@41 -- # local IFS=, 00:11:26.301 16:28:57 -- accel/accel.sh@42 -- # jq -r . 00:11:26.301 [2024-07-13 16:28:57.642410] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:26.301 [2024-07-13 16:28:57.643268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118623 ] 00:11:26.559 [2024-07-13 16:28:57.799167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.559 [2024-07-13 16:28:57.868459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.937 16:28:59 -- accel/accel.sh@18 -- # out=' 00:11:27.937 SPDK Configuration: 00:11:27.937 Core mask: 0x1 00:11:27.937 00:11:27.937 Accel Perf Configuration: 00:11:27.937 Workload Type: dif_generate 00:11:27.937 Vector size: 4096 bytes 00:11:27.937 Transfer size: 4096 bytes 00:11:27.937 Block size: 512 bytes 00:11:27.937 Metadata size: 8 bytes 00:11:27.937 Vector count 1 00:11:27.937 Module: software 00:11:27.937 Queue depth: 32 00:11:27.937 Allocate depth: 32 00:11:27.937 # threads/core: 1 00:11:27.937 Run time: 1 seconds 00:11:27.937 Verify: No 00:11:27.937 00:11:27.937 Running for 1 seconds... 
00:11:27.937 00:11:27.937 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:27.937 ------------------------------------------------------------------------------------ 00:11:27.937 0,0 143200/s 559 MiB/s 0 0 00:11:27.937 ==================================================================================== 00:11:27.937 Total 143200/s 559 MiB/s 0 0' 00:11:27.937 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:27.937 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:27.937 16:28:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:27.937 16:28:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:27.937 16:28:59 -- accel/accel.sh@12 -- # build_accel_config 00:11:27.937 16:28:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:27.937 16:28:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.937 16:28:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.937 16:28:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:27.937 16:28:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:27.937 16:28:59 -- accel/accel.sh@41 -- # local IFS=, 00:11:27.937 16:28:59 -- accel/accel.sh@42 -- # jq -r . 00:11:27.937 [2024-07-13 16:28:59.306210] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:27.937 [2024-07-13 16:28:59.307017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118645 ] 00:11:28.195 [2024-07-13 16:28:59.462773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.195 [2024-07-13 16:28:59.553618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.195 16:28:59 -- accel/accel.sh@21 -- # val= 00:11:28.195 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.195 16:28:59 -- accel/accel.sh@21 -- # val= 00:11:28.195 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.195 16:28:59 -- accel/accel.sh@21 -- # val=0x1 00:11:28.195 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.195 16:28:59 -- accel/accel.sh@21 -- # val= 00:11:28.195 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.195 16:28:59 -- accel/accel.sh@21 -- # val= 00:11:28.195 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.195 16:28:59 -- accel/accel.sh@21 -- # val=dif_generate 00:11:28.195 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.195 16:28:59 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.195 16:28:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:28.195 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.195 16:28:59 -- accel/accel.sh@20 -- # read -r var val
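The accel/accel.sh@20-@22 records that dominate this log are the harness's output parser at work: the second accel_perf run is piped through a while IFS=: read -r var val loop, and a case statement on $var stores fields such as accel_opc (accel.sh@24) and accel_module (accel.sh@23) so the closing accel/accel.sh@28 checks can assert which module and opcode actually ran. A condensed paraphrase of that loop, under stated assumptions and not the verbatim script:

# Hedged sketch of the parsing loop behind the @20/@21/@22 trace records.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
while IFS=: read -r var val; do
    case "$var" in
        *"Module"*)        accel_module=${val// /} ;;  # expected: software
        *"Workload Type"*) accel_opc=${val// /} ;;     # e.g. dif_generate
    esac
done < <("$ACCEL_PERF" -t 1 -w dif_generate)
[[ -n $accel_module && $accel_module == software ]]   # the @28-style assertion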
00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val= 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val=software 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@23 -- # accel_module=software 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val=32 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val=32 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val=1 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val=No 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val= 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:28.196 16:28:59 -- accel/accel.sh@21 -- # val= 00:11:28.196 16:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # IFS=: 00:11:28.196 16:28:59 -- accel/accel.sh@20 -- # read -r var val 00:11:29.573 16:29:00 -- accel/accel.sh@21 -- # val= 00:11:29.573 16:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:29.573 16:29:00 -- accel/accel.sh@21 -- # val= 00:11:29.573 16:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:29.573 16:29:00 -- accel/accel.sh@21 -- # val= 00:11:29.573 16:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.573 16:29:00 -- 
accel/accel.sh@20 -- # IFS=: 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:29.573 16:29:00 -- accel/accel.sh@21 -- # val= 00:11:29.573 16:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:29.573 16:29:00 -- accel/accel.sh@21 -- # val= 00:11:29.573 16:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:29.573 16:29:00 -- accel/accel.sh@21 -- # val= 00:11:29.573 16:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # IFS=: 00:11:29.573 16:29:00 -- accel/accel.sh@20 -- # read -r var val 00:11:29.573 16:29:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:29.573 16:29:00 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:29.573 16:29:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:29.573 ************************************ 00:11:29.573 END TEST accel_dif_generate 00:11:29.573 ************************************ 00:11:29.573 00:11:29.573 real 0m3.370s 00:11:29.573 user 0m2.753s 00:11:29.573 sys 0m0.421s 00:11:29.573 16:29:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.573 16:29:00 -- common/autotest_common.sh@10 -- # set +x 00:11:29.573 16:29:01 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:29.573 16:29:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:29.573 16:29:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:29.573 16:29:01 -- common/autotest_common.sh@10 -- # set +x 00:11:29.573 ************************************ 00:11:29.573 START TEST accel_dif_generate_copy 00:11:29.573 ************************************ 00:11:29.573 16:29:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:29.573 16:29:01 -- accel/accel.sh@16 -- # local accel_opc 00:11:29.573 16:29:01 -- accel/accel.sh@17 -- # local accel_module 00:11:29.573 16:29:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:29.573 16:29:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:29.833 16:29:01 -- accel/accel.sh@12 -- # build_accel_config 00:11:29.833 16:29:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:29.833 16:29:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:29.833 16:29:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:29.833 16:29:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:29.833 16:29:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:29.833 16:29:01 -- accel/accel.sh@41 -- # local IFS=, 00:11:29.833 16:29:01 -- accel/accel.sh@42 -- # jq -r . 00:11:29.833 [2024-07-13 16:29:01.070573] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
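The dif_verify and dif_generate configurations above both report Block size: 512 bytes and Metadata size: 8 bytes, i.e. the classic 512+8 protection-information layout, while the dif_generate_copy configuration that follows prints no block/metadata lines in this log. For the 4096-byte transfers used throughout, the protection-information volume is easy to derive (illustrative arithmetic only):

echo $(( 4096 / 512 ))       # -> 8 protected blocks per 4096-byte transfer
echo $(( 4096 / 512 * 8 ))   # -> 64 bytes of DIF metadata per transfer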
00:11:29.833 [2024-07-13 16:29:01.070903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118690 ] 00:11:29.833 [2024-07-13 16:29:01.220582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.833 [2024-07-13 16:29:01.294565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.740 16:29:02 -- accel/accel.sh@18 -- # out=' 00:11:31.740 SPDK Configuration: 00:11:31.740 Core mask: 0x1 00:11:31.740 00:11:31.740 Accel Perf Configuration: 00:11:31.740 Workload Type: dif_generate_copy 00:11:31.740 Vector size: 4096 bytes 00:11:31.740 Transfer size: 4096 bytes 00:11:31.740 Vector count 1 00:11:31.740 Module: software 00:11:31.740 Queue depth: 32 00:11:31.740 Allocate depth: 32 00:11:31.740 # threads/core: 1 00:11:31.740 Run time: 1 seconds 00:11:31.740 Verify: No 00:11:31.740 00:11:31.740 Running for 1 seconds... 00:11:31.740 00:11:31.740 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:31.740 ------------------------------------------------------------------------------------ 00:11:31.740 0,0 110272/s 430 MiB/s 0 0 00:11:31.740 ==================================================================================== 00:11:31.740 Total 110272/s 430 MiB/s 0 0' 00:11:31.740 16:29:02 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:02 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:31.740 16:29:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:31.740 16:29:02 -- accel/accel.sh@12 -- # build_accel_config 00:11:31.740 16:29:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:31.740 16:29:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:31.740 16:29:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:31.740 16:29:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:31.740 16:29:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:31.740 16:29:02 -- accel/accel.sh@41 -- # local IFS=, 00:11:31.740 16:29:02 -- accel/accel.sh@42 -- # jq -r . 00:11:31.740 [2024-07-13 16:29:02.722698] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
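Each accel_perf invocation is a separate SPDK process, which is why every test section shows two full EAL initializations and why each EAL parameter line carries a fresh --file-prefix=spdk_pid<NNN> hugepage namespace, paired with --huge-unlink so the hugepage files are removed once mapped. A hypothetical way to list the prefixes from a saved copy of this console output (console.log is a placeholder filename):

grep -o -- '--file-prefix=spdk_pid[0-9]*' console.log | sort -u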
00:11:31.740 [2024-07-13 16:29:02.722971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118722 ] 00:11:31.740 [2024-07-13 16:29:02.872548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.740 [2024-07-13 16:29:02.956720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val= 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val= 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val=0x1 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val= 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val= 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val= 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val=software 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@23 -- # accel_module=software 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val=32 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val=32 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 
-- # val=1 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val=No 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val= 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:31.740 16:29:03 -- accel/accel.sh@21 -- # val= 00:11:31.740 16:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # IFS=: 00:11:31.740 16:29:03 -- accel/accel.sh@20 -- # read -r var val 00:11:33.116 16:29:04 -- accel/accel.sh@21 -- # val= 00:11:33.116 16:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # IFS=: 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # read -r var val 00:11:33.116 16:29:04 -- accel/accel.sh@21 -- # val= 00:11:33.116 16:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # IFS=: 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # read -r var val 00:11:33.116 16:29:04 -- accel/accel.sh@21 -- # val= 00:11:33.116 16:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # IFS=: 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # read -r var val 00:11:33.116 16:29:04 -- accel/accel.sh@21 -- # val= 00:11:33.116 16:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # IFS=: 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # read -r var val 00:11:33.116 16:29:04 -- accel/accel.sh@21 -- # val= 00:11:33.116 16:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # IFS=: 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # read -r var val 00:11:33.116 16:29:04 -- accel/accel.sh@21 -- # val= 00:11:33.116 16:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # IFS=: 00:11:33.116 16:29:04 -- accel/accel.sh@20 -- # read -r var val 00:11:33.116 16:29:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:33.116 16:29:04 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:33.116 ************************************ 00:11:33.116 END TEST accel_dif_generate_copy 00:11:33.116 ************************************ 00:11:33.116 16:29:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:33.116 00:11:33.116 real 0m3.336s 00:11:33.116 user 0m2.756s 00:11:33.116 sys 0m0.400s 00:11:33.116 16:29:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.116 16:29:04 -- common/autotest_common.sh@10 -- # set +x 00:11:33.116 16:29:04 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:33.116 16:29:04 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.116 16:29:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:33.116 16:29:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:33.116 16:29:04 -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.116 ************************************ 00:11:33.117 START TEST accel_comp 00:11:33.117 ************************************ 00:11:33.117 16:29:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.117 16:29:04 -- accel/accel.sh@16 -- # local accel_opc 00:11:33.117 16:29:04 -- accel/accel.sh@17 -- # local accel_module 00:11:33.117 16:29:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.117 16:29:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.117 16:29:04 -- accel/accel.sh@12 -- # build_accel_config 00:11:33.117 16:29:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:33.117 16:29:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:33.117 16:29:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:33.117 16:29:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:33.117 16:29:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:33.117 16:29:04 -- accel/accel.sh@41 -- # local IFS=, 00:11:33.117 16:29:04 -- accel/accel.sh@42 -- # jq -r . 00:11:33.117 [2024-07-13 16:29:04.476617] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:33.117 [2024-07-13 16:29:04.477461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118760 ] 00:11:33.375 [2024-07-13 16:29:04.628622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.375 [2024-07-13 16:29:04.702909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.750 16:29:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:34.750 00:11:34.750 SPDK Configuration: 00:11:34.750 Core mask: 0x1 00:11:34.750 00:11:34.750 Accel Perf Configuration: 00:11:34.750 Workload Type: compress 00:11:34.750 Transfer size: 4096 bytes 00:11:34.750 Vector count 1 00:11:34.750 Module: software 00:11:34.750 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:34.750 Queue depth: 32 00:11:34.750 Allocate depth: 32 00:11:34.750 # threads/core: 1 00:11:34.750 Run time: 1 seconds 00:11:34.750 Verify: No 00:11:34.750 00:11:34.750 Running for 1 seconds... 
00:11:34.750 00:11:34.750 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:34.750 ------------------------------------------------------------------------------------ 00:11:34.750 0,0 60096/s 234 MiB/s 0 0 00:11:34.750 ==================================================================================== 00:11:34.750 Total 60096/s 234 MiB/s 0 0' 00:11:34.750 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:34.750 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:34.750 16:29:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:34.750 16:29:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:34.750 16:29:06 -- accel/accel.sh@12 -- # build_accel_config 00:11:34.750 16:29:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:34.750 16:29:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:34.750 16:29:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:34.750 16:29:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:34.750 16:29:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:34.750 16:29:06 -- accel/accel.sh@41 -- # local IFS=, 00:11:34.750 16:29:06 -- accel/accel.sh@42 -- # jq -r . 00:11:34.751 [2024-07-13 16:29:06.130541] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:34.751 [2024-07-13 16:29:06.130794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118790 ] 00:11:35.009 [2024-07-13 16:29:06.279890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.009 [2024-07-13 16:29:06.390083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val= 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val= 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val= 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val=0x1 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val= 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val= 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val=compress 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=:
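From accel_comp onward the workloads operate on a real corpus: -l points accel_perf at test/accel/bib (the File Name: line in each configuration), the compress run keeps Verify: No, and the decompress cases add -y so every output block is checked (Verify: Yes in the blocks below). A standalone sketch of the pair, under the same path assumptions as the earlier sketch:

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
"$ACCEL_PERF" -t 1 -w compress   -l "$BIB"       # compress the bib input
"$ACCEL_PERF" -t 1 -w decompress -l "$BIB" -y    # decompress and verify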
00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val= 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val=software 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@23 -- # accel_module=software 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val=32 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val=32 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val=1 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val=No 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val= 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:35.267 16:29:06 -- accel/accel.sh@21 -- # val= 00:11:35.267 16:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # IFS=: 00:11:35.267 16:29:06 -- accel/accel.sh@20 -- # read -r var val 00:11:36.645 16:29:07 -- accel/accel.sh@21 -- # val= 00:11:36.645 16:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # IFS=: 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # read -r var val 00:11:36.645 16:29:07 -- accel/accel.sh@21 -- # val= 00:11:36.645 16:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # IFS=: 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # read -r var val 00:11:36.645 16:29:07 -- accel/accel.sh@21 -- # val= 00:11:36.645 16:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # IFS=: 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # read -r var val 00:11:36.645 16:29:07 -- accel/accel.sh@21 -- # val= 
00:11:36.645 16:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # IFS=: 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # read -r var val 00:11:36.645 16:29:07 -- accel/accel.sh@21 -- # val= 00:11:36.645 16:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # IFS=: 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # read -r var val 00:11:36.645 16:29:07 -- accel/accel.sh@21 -- # val= 00:11:36.645 16:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # IFS=: 00:11:36.645 16:29:07 -- accel/accel.sh@20 -- # read -r var val 00:11:36.645 16:29:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:36.645 16:29:07 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:36.645 16:29:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:36.645 00:11:36.645 real 0m3.381s 00:11:36.645 user 0m2.765s 00:11:36.645 sys 0m0.419s 00:11:36.645 16:29:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.645 16:29:07 -- common/autotest_common.sh@10 -- # set +x 00:11:36.645 ************************************ 00:11:36.645 END TEST accel_comp 00:11:36.645 ************************************ 00:11:36.645 16:29:07 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:36.645 16:29:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:36.645 16:29:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:36.645 16:29:07 -- common/autotest_common.sh@10 -- # set +x 00:11:36.645 ************************************ 00:11:36.645 START TEST accel_decomp 00:11:36.645 ************************************ 00:11:36.645 16:29:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:36.645 16:29:07 -- accel/accel.sh@16 -- # local accel_opc 00:11:36.645 16:29:07 -- accel/accel.sh@17 -- # local accel_module 00:11:36.645 16:29:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:36.645 16:29:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:36.645 16:29:07 -- accel/accel.sh@12 -- # build_accel_config 00:11:36.645 16:29:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:36.645 16:29:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.645 16:29:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.645 16:29:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:36.645 16:29:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:36.645 16:29:07 -- accel/accel.sh@41 -- # local IFS=, 00:11:36.645 16:29:07 -- accel/accel.sh@42 -- # jq -r . 00:11:36.645 [2024-07-13 16:29:07.920951] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:36.645 [2024-07-13 16:29:07.921824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118835 ] 00:11:36.645 [2024-07-13 16:29:08.076344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.904 [2024-07-13 16:29:08.150219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.281 16:29:09 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:38.281 00:11:38.281 SPDK Configuration: 00:11:38.281 Core mask: 0x1 00:11:38.281 00:11:38.281 Accel Perf Configuration: 00:11:38.281 Workload Type: decompress 00:11:38.281 Transfer size: 4096 bytes 00:11:38.281 Vector count 1 00:11:38.281 Module: software 00:11:38.281 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:38.281 Queue depth: 32 00:11:38.281 Allocate depth: 32 00:11:38.281 # threads/core: 1 00:11:38.281 Run time: 1 seconds 00:11:38.281 Verify: Yes 00:11:38.281 00:11:38.281 Running for 1 seconds... 00:11:38.281 00:11:38.282 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:38.282 ------------------------------------------------------------------------------------ 00:11:38.282 0,0 65696/s 256 MiB/s 0 0 00:11:38.282 ==================================================================================== 00:11:38.282 Total 65696/s 256 MiB/s 0 0' 00:11:38.282 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.282 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.282 16:29:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:38.282 16:29:09 -- accel/accel.sh@12 -- # build_accel_config 00:11:38.282 16:29:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:38.282 16:29:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:38.282 16:29:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:38.282 16:29:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:38.282 16:29:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:38.282 16:29:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:38.282 16:29:09 -- accel/accel.sh@41 -- # local IFS=, 00:11:38.282 16:29:09 -- accel/accel.sh@42 -- # jq -r . 00:11:38.282 [2024-07-13 16:29:09.592242] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:11:38.282 [2024-07-13 16:29:09.592541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118858 ] 00:11:38.282 [2024-07-13 16:29:09.742301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.540 [2024-07-13 16:29:09.832880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val= 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val= 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val= 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val=0x1 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val= 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val= 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val=decompress 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val= 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val=software 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@23 -- # accel_module=software 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.540 16:29:09 -- accel/accel.sh@21 -- # val=32 00:11:38.540 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.540 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.541 16:29:09 -- 
accel/accel.sh@21 -- # val=32 00:11:38.541 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.541 16:29:09 -- accel/accel.sh@21 -- # val=1 00:11:38.541 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.541 16:29:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:38.541 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.541 16:29:09 -- accel/accel.sh@21 -- # val=Yes 00:11:38.541 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.541 16:29:09 -- accel/accel.sh@21 -- # val= 00:11:38.541 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:38.541 16:29:09 -- accel/accel.sh@21 -- # val= 00:11:38.541 16:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # IFS=: 00:11:38.541 16:29:09 -- accel/accel.sh@20 -- # read -r var val 00:11:39.944 16:29:11 -- accel/accel.sh@21 -- # val= 00:11:39.944 16:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # IFS=: 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # read -r var val 00:11:39.944 16:29:11 -- accel/accel.sh@21 -- # val= 00:11:39.944 16:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # IFS=: 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # read -r var val 00:11:39.944 16:29:11 -- accel/accel.sh@21 -- # val= 00:11:39.944 16:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # IFS=: 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # read -r var val 00:11:39.944 16:29:11 -- accel/accel.sh@21 -- # val= 00:11:39.944 16:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # IFS=: 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # read -r var val 00:11:39.944 16:29:11 -- accel/accel.sh@21 -- # val= 00:11:39.944 16:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # IFS=: 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # read -r var val 00:11:39.944 16:29:11 -- accel/accel.sh@21 -- # val= 00:11:39.944 16:29:11 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # IFS=: 00:11:39.944 16:29:11 -- accel/accel.sh@20 -- # read -r var val 00:11:39.944 16:29:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:39.944 16:29:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:39.944 16:29:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:39.944 00:11:39.944 real 0m3.379s 00:11:39.944 user 0m2.759s 00:11:39.944 sys 0m0.439s 00:11:39.944 16:29:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.944 ************************************ 00:11:39.944 END TEST accel_decomp 00:11:39.944 ************************************ 00:11:39.944 16:29:11 -- common/autotest_common.sh@10 -- # set +x 00:11:39.944 16:29:11 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
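accel_decmop_full (the spelling follows the test name used by the harness) repeats the decompress-with-verify run with -o 0, which evidently selects the full decompressed chunk size rather than the 4096-byte default: the configuration below reports Transfer size: 111250 bytes. The same bandwidth arithmetic applies at the larger size (illustrative, using the numbers from the results table further down):

echo $(( 4704 * 111250 / 1024 / 1024 ))   # -> 499 MiB/s at 111250 B per transfer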
00:11:39.944 16:29:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:39.944 16:29:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.944 16:29:11 -- common/autotest_common.sh@10 -- # set +x 00:11:39.944 ************************************ 00:11:39.944 START TEST accel_decmop_full 00:11:39.944 ************************************ 00:11:39.944 16:29:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:39.944 16:29:11 -- accel/accel.sh@16 -- # local accel_opc 00:11:39.944 16:29:11 -- accel/accel.sh@17 -- # local accel_module 00:11:39.944 16:29:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:39.944 16:29:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:39.944 16:29:11 -- accel/accel.sh@12 -- # build_accel_config 00:11:39.944 16:29:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:39.944 16:29:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.944 16:29:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.944 16:29:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:39.944 16:29:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:39.944 16:29:11 -- accel/accel.sh@41 -- # local IFS=, 00:11:39.944 16:29:11 -- accel/accel.sh@42 -- # jq -r . 00:11:39.944 [2024-07-13 16:29:11.353021] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:39.944 [2024-07-13 16:29:11.353219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118903 ] 00:11:40.201 [2024-07-13 16:29:11.496113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.201 [2024-07-13 16:29:11.574127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.575 16:29:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:41.575 00:11:41.575 SPDK Configuration: 00:11:41.575 Core mask: 0x1 00:11:41.575 00:11:41.575 Accel Perf Configuration: 00:11:41.575 Workload Type: decompress 00:11:41.575 Transfer size: 111250 bytes 00:11:41.575 Vector count 1 00:11:41.575 Module: software 00:11:41.575 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:41.575 Queue depth: 32 00:11:41.575 Allocate depth: 32 00:11:41.575 # threads/core: 1 00:11:41.575 Run time: 1 seconds 00:11:41.575 Verify: Yes 00:11:41.575 00:11:41.575 Running for 1 seconds... 
00:11:41.575 00:11:41.575 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:41.575 ------------------------------------------------------------------------------------ 00:11:41.575 0,0 4704/s 499 MiB/s 0 0 00:11:41.575 ==================================================================================== 00:11:41.575 Total 4704/s 499 MiB/s 0 0' 00:11:41.575 16:29:12 -- accel/accel.sh@20 -- # IFS=: 00:11:41.575 16:29:12 -- accel/accel.sh@20 -- # read -r var val 00:11:41.575 16:29:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:41.575 16:29:12 -- accel/accel.sh@12 -- # build_accel_config 00:11:41.575 16:29:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:41.575 16:29:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:41.575 16:29:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.575 16:29:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.575 16:29:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:41.575 16:29:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:41.575 16:29:12 -- accel/accel.sh@41 -- # local IFS=, 00:11:41.575 16:29:12 -- accel/accel.sh@42 -- # jq -r . 00:11:41.575 [2024-07-13 16:29:13.025242] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:41.575 [2024-07-13 16:29:13.026095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118939 ] 00:11:41.834 [2024-07-13 16:29:13.180474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.834 [2024-07-13 16:29:13.273217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.091 16:29:13 -- accel/accel.sh@21 -- # val= 00:11:42.091 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.091 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.091 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.091 16:29:13 -- accel/accel.sh@21 -- # val= 00:11:42.091 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.091 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.091 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.091 16:29:13 -- accel/accel.sh@21 -- # val= 00:11:42.091 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.091 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.091 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.091 16:29:13 -- accel/accel.sh@21 -- # val=0x1 00:11:42.091 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val= 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val= 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val=decompress 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:42.092 16:29:13 --
accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val= 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val=software 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@23 -- # accel_module=software 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val=32 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val=32 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val=1 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val=Yes 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val= 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:42.092 16:29:13 -- accel/accel.sh@21 -- # val= 00:11:42.092 16:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # IFS=: 00:11:42.092 16:29:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.464 16:29:14 -- accel/accel.sh@21 -- # val= 00:11:43.465 16:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # IFS=: 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # read -r var val 00:11:43.465 16:29:14 -- accel/accel.sh@21 -- # val= 00:11:43.465 16:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # IFS=: 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # read -r var val 00:11:43.465 16:29:14 -- accel/accel.sh@21 -- # val= 00:11:43.465 16:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # IFS=: 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # read -r var val 00:11:43.465 16:29:14 -- 
accel/accel.sh@21 -- # val= 00:11:43.465 16:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # IFS=: 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # read -r var val 00:11:43.465 16:29:14 -- accel/accel.sh@21 -- # val= 00:11:43.465 16:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # IFS=: 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # read -r var val 00:11:43.465 16:29:14 -- accel/accel.sh@21 -- # val= 00:11:43.465 16:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # IFS=: 00:11:43.465 16:29:14 -- accel/accel.sh@20 -- # read -r var val 00:11:43.465 16:29:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:43.465 16:29:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:43.465 16:29:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:43.465 00:11:43.465 real 0m3.368s 00:11:43.465 user 0m2.766s 00:11:43.465 sys 0m0.428s 00:11:43.465 16:29:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.465 ************************************ 00:11:43.465 END TEST accel_decomp_full 00:11:43.465 ************************************ 00:11:43.465 16:29:14 -- common/autotest_common.sh@10 -- # set +x 00:11:43.465 16:29:14 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:43.465 16:29:14 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:43.465 16:29:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.465 16:29:14 -- common/autotest_common.sh@10 -- # set +x 00:11:43.465 ************************************ 00:11:43.465 START TEST accel_decomp_mcore 00:11:43.465 ************************************ 00:11:43.465 16:29:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:43.465 16:29:14 -- accel/accel.sh@16 -- # local accel_opc 00:11:43.465 16:29:14 -- accel/accel.sh@17 -- # local accel_module 00:11:43.465 16:29:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:43.465 16:29:14 -- accel/accel.sh@12 -- # build_accel_config 00:11:43.465 16:29:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:43.465 16:29:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:43.465 16:29:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.465 16:29:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.465 16:29:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:43.465 16:29:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:43.465 16:29:14 -- accel/accel.sh@41 -- # local IFS=, 00:11:43.465 16:29:14 -- accel/accel.sh@42 -- # jq -r . 00:11:43.465 [2024-07-13 16:29:14.793071] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:11:43.465 [2024-07-13 16:29:14.793367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118973 ] 00:11:43.723 [2024-07-13 16:29:14.965352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.723 [2024-07-13 16:29:15.041271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.723 [2024-07-13 16:29:15.041432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.723 [2024-07-13 16:29:15.041607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.723 [2024-07-13 16:29:15.041671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.098 16:29:16 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:45.098 00:11:45.098 SPDK Configuration: 00:11:45.098 Core mask: 0xf 00:11:45.098 00:11:45.098 Accel Perf Configuration: 00:11:45.098 Workload Type: decompress 00:11:45.098 Transfer size: 4096 bytes 00:11:45.098 Vector count 1 00:11:45.098 Module: software 00:11:45.098 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:45.098 Queue depth: 32 00:11:45.098 Allocate depth: 32 00:11:45.098 # threads/core: 1 00:11:45.098 Run time: 1 seconds 00:11:45.098 Verify: Yes 00:11:45.098 00:11:45.098 Running for 1 seconds... 00:11:45.098 00:11:45.098 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:45.098 ------------------------------------------------------------------------------------ 00:11:45.098 0,0 55712/s 102 MiB/s 0 0 00:11:45.098 3,0 55392/s 102 MiB/s 0 0 00:11:45.098 2,0 57440/s 105 MiB/s 0 0 00:11:45.098 1,0 58016/s 106 MiB/s 0 0 00:11:45.098 ==================================================================================== 00:11:45.098 Total 226560/s 885 MiB/s 0 0' 00:11:45.098 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.098 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.098 16:29:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:45.098 16:29:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:45.098 16:29:16 -- accel/accel.sh@12 -- # build_accel_config 00:11:45.098 16:29:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:45.098 16:29:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:45.098 16:29:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:45.098 16:29:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:45.098 16:29:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:45.098 16:29:16 -- accel/accel.sh@41 -- # local IFS=, 00:11:45.098 16:29:16 -- accel/accel.sh@42 -- # jq -r . 00:11:45.098 [2024-07-13 16:29:16.473126] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:45.098 [2024-07-13 16:29:16.473343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119011 ] 00:11:45.357 [2024-07-13 16:29:16.634178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.357 [2024-07-13 16:29:16.726168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.357 [2024-07-13 16:29:16.726336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.357 [2024-07-13 16:29:16.727408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.357 [2024-07-13 16:29:16.727502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val= 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val= 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val= 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val=0xf 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val= 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val= 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val=decompress 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val= 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val=software 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@23 -- # accel_module=software 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 
00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val=32 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.616 16:29:16 -- accel/accel.sh@21 -- # val=32 00:11:45.616 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.616 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.617 16:29:16 -- accel/accel.sh@21 -- # val=1 00:11:45.617 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.617 16:29:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:45.617 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.617 16:29:16 -- accel/accel.sh@21 -- # val=Yes 00:11:45.617 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.617 16:29:16 -- accel/accel.sh@21 -- # val= 00:11:45.617 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:45.617 16:29:16 -- accel/accel.sh@21 -- # val= 00:11:45.617 16:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # IFS=: 00:11:45.617 16:29:16 -- accel/accel.sh@20 -- # read -r var val 00:11:46.994 16:29:18 -- accel/accel.sh@21 -- # val= 00:11:46.994 16:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.994 16:29:18 -- accel/accel.sh@20 -- # IFS=: 00:11:46.994 16:29:18 -- accel/accel.sh@20 -- # read -r var val 00:11:46.994 16:29:18 -- accel/accel.sh@21 -- # val= 00:11:46.994 16:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.994 16:29:18 -- accel/accel.sh@20 -- # IFS=: 00:11:46.994 16:29:18 -- accel/accel.sh@20 -- # read -r var val 00:11:46.994 16:29:18 -- accel/accel.sh@21 -- # val= 00:11:46.994 16:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.994 16:29:18 -- accel/accel.sh@20 -- # IFS=: 00:11:46.994 16:29:18 -- accel/accel.sh@20 -- # read -r var val 00:11:46.994 16:29:18 -- accel/accel.sh@21 -- # val= 00:11:46.994 16:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # IFS=: 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # read -r var val 00:11:46.995 16:29:18 -- accel/accel.sh@21 -- # val= 00:11:46.995 16:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # IFS=: 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # read -r var val 00:11:46.995 16:29:18 -- accel/accel.sh@21 -- # val= 00:11:46.995 16:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # IFS=: 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # read -r var val 00:11:46.995 16:29:18 -- accel/accel.sh@21 -- # val= 00:11:46.995 16:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # IFS=: 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # read -r var val 00:11:46.995 16:29:18 -- accel/accel.sh@21 -- # val= 00:11:46.995 16:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # IFS=: 00:11:46.995 16:29:18 -- 
accel/accel.sh@20 -- # read -r var val 00:11:46.995 16:29:18 -- accel/accel.sh@21 -- # val= 00:11:46.995 16:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # IFS=: 00:11:46.995 16:29:18 -- accel/accel.sh@20 -- # read -r var val 00:11:46.995 16:29:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:46.995 16:29:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:46.995 ************************************ 00:11:46.995 END TEST accel_decomp_mcore 00:11:46.995 ************************************ 00:11:46.995 16:29:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:46.995 00:11:46.995 real 0m3.401s 00:11:46.995 user 0m10.090s 00:11:46.995 sys 0m0.468s 00:11:46.995 16:29:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.995 16:29:18 -- common/autotest_common.sh@10 -- # set +x 00:11:46.995 16:29:18 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:46.995 16:29:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:46.995 16:29:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.995 16:29:18 -- common/autotest_common.sh@10 -- # set +x 00:11:46.995 ************************************ 00:11:46.995 START TEST accel_decomp_full_mcore 00:11:46.995 ************************************ 00:11:46.995 16:29:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:46.995 16:29:18 -- accel/accel.sh@16 -- # local accel_opc 00:11:46.995 16:29:18 -- accel/accel.sh@17 -- # local accel_module 00:11:46.995 16:29:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:46.995 16:29:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:46.995 16:29:18 -- accel/accel.sh@12 -- # build_accel_config 00:11:46.995 16:29:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:46.995 16:29:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.995 16:29:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.995 16:29:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:46.995 16:29:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:46.995 16:29:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:46.995 16:29:18 -- accel/accel.sh@42 -- # jq -r . 00:11:46.995 [2024-07-13 16:29:18.246843] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:46.995 [2024-07-13 16:29:18.247049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119054 ] 00:11:46.995 [2024-07-13 16:29:18.407872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.254 [2024-07-13 16:29:18.489174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.254 [2024-07-13 16:29:18.489306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.254 [2024-07-13 16:29:18.490527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.254 [2024-07-13 16:29:18.490516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.631 16:29:19 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:48.631 00:11:48.631 SPDK Configuration: 00:11:48.631 Core mask: 0xf 00:11:48.631 00:11:48.631 Accel Perf Configuration: 00:11:48.631 Workload Type: decompress 00:11:48.631 Transfer size: 111250 bytes 00:11:48.631 Vector count 1 00:11:48.631 Module: software 00:11:48.631 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:48.631 Queue depth: 32 00:11:48.631 Allocate depth: 32 00:11:48.631 # threads/core: 1 00:11:48.631 Run time: 1 seconds 00:11:48.631 Verify: Yes 00:11:48.631 00:11:48.631 Running for 1 seconds... 00:11:48.631 00:11:48.631 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:48.631 ------------------------------------------------------------------------------------ 00:11:48.631 0,0 4832/s 199 MiB/s 0 0 00:11:48.631 3,0 4800/s 198 MiB/s 0 0 00:11:48.631 2,0 4832/s 199 MiB/s 0 0 00:11:48.631 1,0 4832/s 199 MiB/s 0 0 00:11:48.631 ==================================================================================== 00:11:48.631 Total 19296/s 2047 MiB/s 0 0' 00:11:48.631 16:29:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.631 16:29:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.631 16:29:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:48.631 16:29:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:48.631 16:29:19 -- accel/accel.sh@12 -- # build_accel_config 00:11:48.631 16:29:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.631 16:29:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.631 16:29:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.631 16:29:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.631 16:29:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.631 16:29:19 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.631 16:29:19 -- accel/accel.sh@42 -- # jq -r . 00:11:48.631 [2024-07-13 16:29:19.941561] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:48.631 [2024-07-13 16:29:19.941793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119085 ] 00:11:48.890 [2024-07-13 16:29:20.103411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.890 [2024-07-13 16:29:20.202316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.890 [2024-07-13 16:29:20.202469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.890 [2024-07-13 16:29:20.203845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.890 [2024-07-13 16:29:20.203778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val= 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val= 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val= 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val=0xf 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val= 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val= 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val=decompress 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val= 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val=software 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@23 -- # accel_module=software 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 
00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val=32 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val=32 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val=1 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val=Yes 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val= 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:48.890 16:29:20 -- accel/accel.sh@21 -- # val= 00:11:48.890 16:29:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # IFS=: 00:11:48.890 16:29:20 -- accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@21 -- # val= 00:11:50.269 16:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # IFS=: 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@21 -- # val= 00:11:50.269 16:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # IFS=: 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@21 -- # val= 00:11:50.269 16:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # IFS=: 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@21 -- # val= 00:11:50.269 16:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # IFS=: 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@21 -- # val= 00:11:50.269 16:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # IFS=: 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@21 -- # val= 00:11:50.269 16:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # IFS=: 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@21 -- # val= 00:11:50.269 16:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # IFS=: 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@21 -- # val= 00:11:50.269 16:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # IFS=: 00:11:50.269 16:29:21 -- 
accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@21 -- # val= 00:11:50.269 16:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # IFS=: 00:11:50.269 16:29:21 -- accel/accel.sh@20 -- # read -r var val 00:11:50.269 16:29:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:50.269 16:29:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:50.269 ************************************ 00:11:50.269 END TEST accel_decomp_full_mcore 00:11:50.269 ************************************ 00:11:50.269 16:29:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:50.269 00:11:50.269 real 0m3.417s 00:11:50.269 user 0m10.204s 00:11:50.269 sys 0m0.454s 00:11:50.269 16:29:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.269 16:29:21 -- common/autotest_common.sh@10 -- # set +x 00:11:50.269 16:29:21 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:50.269 16:29:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:50.269 16:29:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.269 16:29:21 -- common/autotest_common.sh@10 -- # set +x 00:11:50.269 ************************************ 00:11:50.269 START TEST accel_decomp_mthread 00:11:50.269 ************************************ 00:11:50.269 16:29:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:50.269 16:29:21 -- accel/accel.sh@16 -- # local accel_opc 00:11:50.269 16:29:21 -- accel/accel.sh@17 -- # local accel_module 00:11:50.269 16:29:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:50.269 16:29:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:50.269 16:29:21 -- accel/accel.sh@12 -- # build_accel_config 00:11:50.269 16:29:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:50.269 16:29:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:50.269 16:29:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:50.269 16:29:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:50.269 16:29:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:50.269 16:29:21 -- accel/accel.sh@41 -- # local IFS=, 00:11:50.269 16:29:21 -- accel/accel.sh@42 -- # jq -r . 00:11:50.269 [2024-07-13 16:29:21.728174] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:50.269 [2024-07-13 16:29:21.728478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119128 ] 00:11:50.528 [2024-07-13 16:29:21.881560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.528 [2024-07-13 16:29:21.954272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.901 16:29:23 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:51.901 00:11:51.901 SPDK Configuration: 00:11:51.901 Core mask: 0x1 00:11:51.901 00:11:51.901 Accel Perf Configuration: 00:11:51.901 Workload Type: decompress 00:11:51.901 Transfer size: 4096 bytes 00:11:51.901 Vector count 1 00:11:51.901 Module: software 00:11:51.901 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:51.901 Queue depth: 32 00:11:51.901 Allocate depth: 32 00:11:51.901 # threads/core: 2 00:11:51.901 Run time: 1 seconds 00:11:51.901 Verify: Yes 00:11:51.901 00:11:51.901 Running for 1 seconds... 00:11:51.901 00:11:51.901 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:51.901 ------------------------------------------------------------------------------------ 00:11:51.901 0,1 33280/s 61 MiB/s 0 0 00:11:51.901 0,0 33120/s 61 MiB/s 0 0 00:11:51.901 ==================================================================================== 00:11:51.901 Total 66400/s 259 MiB/s 0 0' 00:11:51.901 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:51.901 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:51.901 16:29:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:51.901 16:29:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:51.901 16:29:23 -- accel/accel.sh@12 -- # build_accel_config 00:11:51.901 16:29:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:51.901 16:29:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:51.901 16:29:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:51.901 16:29:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:51.901 16:29:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:51.901 16:29:23 -- accel/accel.sh@41 -- # local IFS=, 00:11:51.901 16:29:23 -- accel/accel.sh@42 -- # jq -r . 00:11:52.160 [2024-07-13 16:29:23.404946] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:52.160 [2024-07-13 16:29:23.405242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119165 ] 00:11:52.160 [2024-07-13 16:29:23.560701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.418 [2024-07-13 16:29:23.658136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val= 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val= 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val= 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val=0x1 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val= 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val= 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val=decompress 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val= 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val=software 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@23 -- # accel_module=software 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val=32 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- 
accel/accel.sh@21 -- # val=32 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val=2 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val=Yes 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val= 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:52.418 16:29:23 -- accel/accel.sh@21 -- # val= 00:11:52.418 16:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # IFS=: 00:11:52.418 16:29:23 -- accel/accel.sh@20 -- # read -r var val 00:11:53.791 16:29:25 -- accel/accel.sh@21 -- # val= 00:11:53.791 16:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # IFS=: 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # read -r var val 00:11:53.791 16:29:25 -- accel/accel.sh@21 -- # val= 00:11:53.791 16:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # IFS=: 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # read -r var val 00:11:53.791 16:29:25 -- accel/accel.sh@21 -- # val= 00:11:53.791 16:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # IFS=: 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # read -r var val 00:11:53.791 16:29:25 -- accel/accel.sh@21 -- # val= 00:11:53.791 16:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # IFS=: 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # read -r var val 00:11:53.791 16:29:25 -- accel/accel.sh@21 -- # val= 00:11:53.791 16:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # IFS=: 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # read -r var val 00:11:53.791 16:29:25 -- accel/accel.sh@21 -- # val= 00:11:53.791 16:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # IFS=: 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # read -r var val 00:11:53.791 16:29:25 -- accel/accel.sh@21 -- # val= 00:11:53.791 16:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # IFS=: 00:11:53.791 16:29:25 -- accel/accel.sh@20 -- # read -r var val 00:11:53.791 ************************************ 00:11:53.791 END TEST accel_decomp_mthread 00:11:53.791 ************************************ 00:11:53.791 16:29:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:53.791 16:29:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:53.791 16:29:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:53.791 00:11:53.791 real 0m3.390s 00:11:53.791 user 0m2.753s 00:11:53.791 sys 0m0.450s 00:11:53.791 16:29:25 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:53.791 16:29:25 -- common/autotest_common.sh@10 -- # set +x 00:11:53.791 16:29:25 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:53.791 16:29:25 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:53.791 16:29:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:53.791 16:29:25 -- common/autotest_common.sh@10 -- # set +x 00:11:53.791 ************************************ 00:11:53.791 START TEST accel_decomp_full_mthread 00:11:53.791 ************************************ 00:11:53.791 16:29:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:53.791 16:29:25 -- accel/accel.sh@16 -- # local accel_opc 00:11:53.791 16:29:25 -- accel/accel.sh@17 -- # local accel_module 00:11:53.791 16:29:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:53.791 16:29:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:53.791 16:29:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:53.791 16:29:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:53.791 16:29:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:53.791 16:29:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:53.791 16:29:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:53.791 16:29:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:53.791 16:29:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:53.791 16:29:25 -- accel/accel.sh@42 -- # jq -r . 00:11:53.791 [2024-07-13 16:29:25.172898] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:53.791 [2024-07-13 16:29:25.173358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119211 ] 00:11:54.049 [2024-07-13 16:29:25.327849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.049 [2024-07-13 16:29:25.400557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.426 16:29:26 -- accel/accel.sh@18 -- # out='Preparing input file...
00:11:55.426 00:11:55.426 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:55.426 ------------------------------------------------------------------------------------ 00:11:55.426 0,1 2432/s 100 MiB/s 0 0 00:11:55.426 0,0 2368/s 97 MiB/s 0 0 00:11:55.426 ==================================================================================== 00:11:55.426 Total 4800/s 509 MiB/s 0 0' 00:11:55.426 16:29:26 -- accel/accel.sh@20 -- # IFS=: 00:11:55.426 16:29:26 -- accel/accel.sh@20 -- # read -r var val 00:11:55.426 16:29:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:55.426 16:29:26 -- accel/accel.sh@12 -- # build_accel_config 00:11:55.426 16:29:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:55.426 16:29:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:55.426 16:29:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:55.426 16:29:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:55.426 16:29:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:55.426 16:29:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:55.426 16:29:26 -- accel/accel.sh@41 -- # local IFS=, 00:11:55.426 16:29:26 -- accel/accel.sh@42 -- # jq -r . 00:11:55.426 [2024-07-13 16:29:26.869047] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:55.426 [2024-07-13 16:29:26.869567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119234 ] 00:11:55.685 [2024-07-13 16:29:27.024276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.685 [2024-07-13 16:29:27.111543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val= 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val= 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val= 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val=0x1 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val= 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val= 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val=decompress 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val= 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val=software 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@23 -- # accel_module=software 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val=32 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val=32 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val=2 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val=Yes 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val= 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:55.943 16:29:27 -- accel/accel.sh@21 -- # val= 00:11:55.943 16:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # IFS=: 00:11:55.943 16:29:27 -- accel/accel.sh@20 -- # read -r var val 00:11:57.321 16:29:28 -- accel/accel.sh@21 -- # val= 00:11:57.321 16:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # IFS=: 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # read -r var val 00:11:57.321 16:29:28 -- accel/accel.sh@21 -- # val= 00:11:57.321 16:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # IFS=: 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # read -r var val 00:11:57.321 16:29:28 -- accel/accel.sh@21 -- # val= 00:11:57.321 16:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # IFS=: 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # 
read -r var val 00:11:57.321 16:29:28 -- accel/accel.sh@21 -- # val= 00:11:57.321 16:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # IFS=: 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # read -r var val 00:11:57.321 16:29:28 -- accel/accel.sh@21 -- # val= 00:11:57.321 16:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # IFS=: 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # read -r var val 00:11:57.321 16:29:28 -- accel/accel.sh@21 -- # val= 00:11:57.321 16:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # IFS=: 00:11:57.321 16:29:28 -- accel/accel.sh@20 -- # read -r var val 00:11:57.321 ************************************ 00:11:57.321 END TEST accel_decomp_full_mthread 00:11:57.321 ************************************ 00:11:57.321 16:29:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:57.321 16:29:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:57.321 16:29:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:57.321 00:11:57.321 real 0m3.423s 00:11:57.321 user 0m2.827s 00:11:57.321 sys 0m0.403s 00:11:57.321 16:29:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.321 16:29:28 -- common/autotest_common.sh@10 -- # set +x 00:11:57.321 16:29:28 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:57.321 16:29:28 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:57.321 16:29:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:57.321 16:29:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.321 16:29:28 -- accel/accel.sh@129 -- # build_accel_config 00:11:57.321 16:29:28 -- common/autotest_common.sh@10 -- # set +x 00:11:57.321 16:29:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:57.321 16:29:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.321 16:29:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.321 16:29:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:57.321 16:29:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:57.321 16:29:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:57.321 16:29:28 -- accel/accel.sh@42 -- # jq -r . 00:11:57.321 ************************************ 00:11:57.321 START TEST accel_dif_functional_tests 00:11:57.321 ************************************ 00:11:57.321 16:29:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:57.321 [2024-07-13 16:29:28.719183] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:11:57.321 [2024-07-13 16:29:28.719691] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119281 ] 00:11:57.580 [2024-07-13 16:29:28.883840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.580 [2024-07-13 16:29:28.956766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.580 [2024-07-13 16:29:28.956912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.580 [2024-07-13 16:29:28.956914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.838 00:11:57.838 00:11:57.838 CUnit - A unit testing framework for C - Version 2.1-3 00:11:57.838 http://cunit.sourceforge.net/ 00:11:57.838 00:11:57.838 00:11:57.838 Suite: accel_dif 00:11:57.838 Test: verify: DIF generated, GUARD check ...passed 00:11:57.838 Test: verify: DIF generated, APPTAG check ...passed 00:11:57.838 Test: verify: DIF generated, REFTAG check ...passed 00:11:57.838 Test: verify: DIF not generated, GUARD check ...[2024-07-13 16:29:29.080368] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:57.838 passed[2024-07-13 16:29:29.080609] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:57.838 00:11:57.838 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 16:29:29.080831] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:57.838 [2024-07-13 16:29:29.081027] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:57.838 passed 00:11:57.838 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 16:29:29.081219] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:57.838 [2024-07-13 16:29:29.081522] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:57.838 passed 00:11:57.838 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:57.838 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 16:29:29.081839] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:57.838 passed 00:11:57.838 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:57.838 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:57.838 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:57.838 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 16:29:29.082676] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:57.838 passed 00:11:57.838 Test: generate copy: DIF generated, GUARD check ...passed 00:11:57.838 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:57.838 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:57.838 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:57.838 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:57.838 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:57.838 Test: generate copy: iovecs-len validate ...[2024-07-13 16:29:29.084062] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:57.838 passed 00:11:57.839 Test: generate copy: buffer alignment validate ...passed 00:11:57.839 00:11:57.839 Run Summary: Type Total Ran Passed Failed Inactive 00:11:57.839 suites 1 1 n/a 0 0 00:11:57.839 tests 20 20 20 0 0 00:11:57.839 asserts 204 204 204 0 n/a 00:11:57.839 00:11:57.839 Elapsed time = 0.017 seconds 00:11:58.096 00:11:58.096 real 0m0.840s 00:11:58.096 user 0m1.124s 00:11:58.096 sys 0m0.279s 00:11:58.096 16:29:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.096 16:29:29 -- common/autotest_common.sh@10 -- # set +x 00:11:58.096 ************************************ 00:11:58.096 END TEST accel_dif_functional_tests 00:11:58.096 ************************************ 00:11:58.096 ************************************ 00:11:58.096 END TEST accel 00:11:58.096 ************************************ 00:11:58.096 00:11:58.096 real 1m13.222s 00:11:58.096 user 1m14.625s 00:11:58.096 sys 0m10.752s 00:11:58.096 16:29:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.096 16:29:29 -- common/autotest_common.sh@10 -- # set +x 00:11:58.096 16:29:29 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:58.096 16:29:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:58.096 16:29:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.096 16:29:29 -- common/autotest_common.sh@10 -- # set +x 00:11:58.096 ************************************ 00:11:58.097 START TEST accel_rpc 00:11:58.097 ************************************ 00:11:58.097 16:29:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:58.355 * Looking for test storage... 00:11:58.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:58.355 16:29:29 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:58.355 16:29:29 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=119359 00:11:58.355 16:29:29 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:58.355 16:29:29 -- accel/accel_rpc.sh@15 -- # waitforlisten 119359 00:11:58.355 16:29:29 -- common/autotest_common.sh@819 -- # '[' -z 119359 ']' 00:11:58.355 16:29:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.355 16:29:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:58.355 16:29:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.355 16:29:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:58.355 16:29:29 -- common/autotest_common.sh@10 -- # set +x 00:11:58.355 [2024-07-13 16:29:29.724298] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
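The accel_dif_functional_tests run above hands the dif binary its accel configuration over /dev/fd/62: build_accel_config assembles a JSON config and run_test feeds it to the binary through process substitution. A minimal sketch of that pattern — the empty subsystems array is an assumption; build_accel_config would populate it when opcodes are reassigned:

  # Feed a JSON accel config to the dif test binary over an fd,
  # mirroring the "-c /dev/fd/62" invocation in the log above.
  # NOTE: the empty config is an assumption for illustration only.
  ./test/accel/dif/dif -c <(printf '{"subsystems":[]}')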
00:11:58.355 [2024-07-13 16:29:29.724714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119359 ] 00:11:58.613 [2024-07-13 16:29:29.867779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.613 [2024-07-13 16:29:29.947506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:58.613 [2024-07-13 16:29:29.947954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.548 16:29:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:59.548 16:29:30 -- common/autotest_common.sh@852 -- # return 0 00:11:59.548 16:29:30 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:59.548 16:29:30 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:59.548 16:29:30 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:59.548 16:29:30 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:59.548 16:29:30 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:59.548 16:29:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:59.548 16:29:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:59.548 16:29:30 -- common/autotest_common.sh@10 -- # set +x 00:11:59.548 ************************************ 00:11:59.548 START TEST accel_assign_opcode 00:11:59.548 ************************************ 00:11:59.548 16:29:30 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:11:59.549 16:29:30 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:59.549 16:29:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.549 16:29:30 -- common/autotest_common.sh@10 -- # set +x 00:11:59.549 [2024-07-13 16:29:30.736945] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:59.549 16:29:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.549 16:29:30 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:59.549 16:29:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.549 16:29:30 -- common/autotest_common.sh@10 -- # set +x 00:11:59.549 [2024-07-13 16:29:30.744915] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:59.549 16:29:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.549 16:29:30 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:59.549 16:29:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.549 16:29:30 -- common/autotest_common.sh@10 -- # set +x 00:11:59.808 16:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.808 16:29:31 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:59.808 16:29:31 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:59.808 16:29:31 -- accel/accel_rpc.sh@42 -- # grep software 00:11:59.808 16:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.808 16:29:31 -- common/autotest_common.sh@10 -- # set +x 00:11:59.808 16:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.808 software 00:11:59.808 00:11:59.808 real 0m0.384s 00:11:59.808 user 0m0.051s 00:11:59.808 sys 0m0.013s 00:11:59.808 16:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.808 16:29:31 -- common/autotest_common.sh@10 -- # set +x 00:11:59.808 ************************************ 
00:11:59.808 END TEST accel_assign_opcode 00:11:59.808 ************************************ 00:11:59.808 16:29:31 -- accel/accel_rpc.sh@55 -- # killprocess 119359 00:11:59.808 16:29:31 -- common/autotest_common.sh@926 -- # '[' -z 119359 ']' 00:11:59.808 16:29:31 -- common/autotest_common.sh@930 -- # kill -0 119359 00:11:59.808 16:29:31 -- common/autotest_common.sh@931 -- # uname 00:11:59.808 16:29:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:59.808 16:29:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119359 00:11:59.808 16:29:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:59.808 16:29:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:59.808 16:29:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119359' 00:11:59.808 killing process with pid 119359 00:11:59.808 16:29:31 -- common/autotest_common.sh@945 -- # kill 119359 00:11:59.808 16:29:31 -- common/autotest_common.sh@950 -- # wait 119359 00:12:00.742 ************************************ 00:12:00.743 END TEST accel_rpc 00:12:00.743 ************************************ 00:12:00.743 00:12:00.743 real 0m2.309s 00:12:00.743 user 0m2.239s 00:12:00.743 sys 0m0.678s 00:12:00.743 16:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.743 16:29:31 -- common/autotest_common.sh@10 -- # set +x 00:12:00.743 16:29:31 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:00.743 16:29:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:00.743 16:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:00.743 16:29:31 -- common/autotest_common.sh@10 -- # set +x 00:12:00.743 ************************************ 00:12:00.743 START TEST app_cmdline 00:12:00.743 ************************************ 00:12:00.743 16:29:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:00.743 * Looking for test storage... 00:12:00.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:00.743 16:29:32 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:00.743 16:29:32 -- app/cmdline.sh@17 -- # spdk_tgt_pid=119467 00:12:00.743 16:29:32 -- app/cmdline.sh@18 -- # waitforlisten 119467 00:12:00.743 16:29:32 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:00.743 16:29:32 -- common/autotest_common.sh@819 -- # '[' -z 119467 ']' 00:12:00.743 16:29:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.743 16:29:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:00.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.743 16:29:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.743 16:29:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:00.743 16:29:32 -- common/autotest_common.sh@10 -- # set +x 00:12:00.743 [2024-07-13 16:29:32.123810] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
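The accel_assign_opcode suite that just completed exercises opcode reassignment against a target started with --wait-for-rpc: an invalid module name is rejected, the software module is accepted, and the assignment becomes visible after framework init. A sketch of the same RPC sequence, assuming the default /var/tmp/spdk.sock socket:

  # Assign the copy opcode to the software module before framework init.
  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  # Confirm the assignment; the test above greps this output for "software".
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy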
00:12:00.743 [2024-07-13 16:29:32.124356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119467 ] 00:12:01.002 [2024-07-13 16:29:32.279750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.002 [2024-07-13 16:29:32.351670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:01.002 [2024-07-13 16:29:32.352148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.936 16:29:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:01.936 16:29:33 -- common/autotest_common.sh@852 -- # return 0 00:12:01.936 16:29:33 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:01.936 { 00:12:01.936 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:12:01.936 "fields": { 00:12:01.936 "major": 24, 00:12:01.936 "minor": 1, 00:12:01.936 "patch": 1, 00:12:01.936 "suffix": "-pre", 00:12:01.936 "commit": "4b94202c6" 00:12:01.936 } 00:12:01.936 } 00:12:01.936 16:29:33 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:01.936 16:29:33 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:01.936 16:29:33 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:01.936 16:29:33 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:01.936 16:29:33 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:01.936 16:29:33 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:01.936 16:29:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.936 16:29:33 -- app/cmdline.sh@26 -- # sort 00:12:01.936 16:29:33 -- common/autotest_common.sh@10 -- # set +x 00:12:01.936 16:29:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.936 16:29:33 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:01.936 16:29:33 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:01.936 16:29:33 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:01.936 16:29:33 -- common/autotest_common.sh@640 -- # local es=0 00:12:01.936 16:29:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:01.936 16:29:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.936 16:29:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:01.936 16:29:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.936 16:29:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:01.936 16:29:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.936 16:29:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:01.936 16:29:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.936 16:29:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:01.936 16:29:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:02.195 request: 00:12:02.195 { 00:12:02.195 "method": "env_dpdk_get_mem_stats", 00:12:02.195 "req_id": 1 00:12:02.195 } 00:12:02.195 Got 
JSON-RPC error response 00:12:02.195 response: 00:12:02.195 { 00:12:02.195 "code": -32601, 00:12:02.195 "message": "Method not found" 00:12:02.195 } 00:12:02.195 16:29:33 -- common/autotest_common.sh@643 -- # es=1 00:12:02.195 16:29:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:02.195 16:29:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:02.195 16:29:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:02.195 16:29:33 -- app/cmdline.sh@1 -- # killprocess 119467 00:12:02.195 16:29:33 -- common/autotest_common.sh@926 -- # '[' -z 119467 ']' 00:12:02.195 16:29:33 -- common/autotest_common.sh@930 -- # kill -0 119467 00:12:02.195 16:29:33 -- common/autotest_common.sh@931 -- # uname 00:12:02.195 16:29:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:02.195 16:29:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119467 00:12:02.195 16:29:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:02.195 16:29:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:02.195 16:29:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119467' 00:12:02.195 killing process with pid 119467 00:12:02.195 16:29:33 -- common/autotest_common.sh@945 -- # kill 119467 00:12:02.195 16:29:33 -- common/autotest_common.sh@950 -- # wait 119467 00:12:03.130 00:12:03.130 real 0m2.323s 00:12:03.130 user 0m2.538s 00:12:03.130 sys 0m0.709s 00:12:03.130 16:29:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.130 16:29:34 -- common/autotest_common.sh@10 -- # set +x 00:12:03.130 ************************************ 00:12:03.130 END TEST app_cmdline 00:12:03.130 ************************************ 00:12:03.130 16:29:34 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:03.130 16:29:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:03.130 16:29:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:03.130 16:29:34 -- common/autotest_common.sh@10 -- # set +x 00:12:03.130 ************************************ 00:12:03.130 START TEST version 00:12:03.130 ************************************ 00:12:03.130 16:29:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:03.130 * Looking for test storage... 
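The app_cmdline suite above starts spdk_tgt with an RPC allow-list, so only spdk_get_version and rpc_get_methods are callable; anything else fails with -32601 "Method not found", as the env_dpdk_get_mem_stats attempt shows. A sketch of reproducing that behaviour against a local build:

  # Start the target with only two RPCs permitted.
  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # (wait for /var/tmp/spdk.sock to come up before issuing RPCs)
  scripts/rpc.py spdk_get_version        # allowed: returns the version JSON
  scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two methods
  scripts/rpc.py env_dpdk_get_mem_stats  # rejected: code -32601, "Method not found"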
00:12:03.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:03.130 16:29:34 -- app/version.sh@17 -- # get_header_version major 00:12:03.130 16:29:34 -- app/version.sh@14 -- # cut -f2 00:12:03.130 16:29:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:03.130 16:29:34 -- app/version.sh@14 -- # tr -d '"' 00:12:03.130 16:29:34 -- app/version.sh@17 -- # major=24 00:12:03.130 16:29:34 -- app/version.sh@18 -- # get_header_version minor 00:12:03.130 16:29:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:03.130 16:29:34 -- app/version.sh@14 -- # tr -d '"' 00:12:03.130 16:29:34 -- app/version.sh@14 -- # cut -f2 00:12:03.130 16:29:34 -- app/version.sh@18 -- # minor=1 00:12:03.130 16:29:34 -- app/version.sh@19 -- # get_header_version patch 00:12:03.130 16:29:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:03.130 16:29:34 -- app/version.sh@14 -- # cut -f2 00:12:03.131 16:29:34 -- app/version.sh@14 -- # tr -d '"' 00:12:03.131 16:29:34 -- app/version.sh@19 -- # patch=1 00:12:03.131 16:29:34 -- app/version.sh@20 -- # get_header_version suffix 00:12:03.131 16:29:34 -- app/version.sh@14 -- # cut -f2 00:12:03.131 16:29:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:03.131 16:29:34 -- app/version.sh@14 -- # tr -d '"' 00:12:03.131 16:29:34 -- app/version.sh@20 -- # suffix=-pre 00:12:03.131 16:29:34 -- app/version.sh@22 -- # version=24.1 00:12:03.131 16:29:34 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:03.131 16:29:34 -- app/version.sh@25 -- # version=24.1.1 00:12:03.131 16:29:34 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:03.131 16:29:34 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:03.131 16:29:34 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:03.131 16:29:34 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:03.131 16:29:34 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:03.131 00:12:03.131 real 0m0.167s 00:12:03.131 user 0m0.095s 00:12:03.131 sys 0m0.119s 00:12:03.131 ************************************ 00:12:03.131 END TEST version 00:12:03.131 ************************************ 00:12:03.131 16:29:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.131 16:29:34 -- common/autotest_common.sh@10 -- # set +x 00:12:03.131 16:29:34 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:12:03.131 16:29:34 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:03.131 16:29:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:03.131 16:29:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:03.131 16:29:34 -- common/autotest_common.sh@10 -- # set +x 00:12:03.131 ************************************ 00:12:03.131 START TEST blockdev_general 00:12:03.131 ************************************ 00:12:03.131 16:29:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:03.389 * Looking for test storage... 
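version.sh above derives each version component by grepping include/spdk/version.h, cutting out the value field, and stripping quotes, then checks the assembled string against what the Python package reports. A compact sketch of that helper — it assumes the header keeps its tab-separated '#define SPDK_VERSION_<FIELD> <value>' layout, as the cut -f2 in the log implies:

  get_header_version() {  # e.g. get_header_version MAJOR -> 24
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
      | cut -f2 | tr -d '"'
  }
  version="$(get_header_version MAJOR).$(get_header_version MINOR)"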
00:12:03.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:03.389 16:29:34 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:03.389 16:29:34 -- bdev/nbd_common.sh@6 -- # set -e 00:12:03.389 16:29:34 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:03.389 16:29:34 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:03.389 16:29:34 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:03.389 16:29:34 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:03.389 16:29:34 -- bdev/blockdev.sh@18 -- # : 00:12:03.389 16:29:34 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:03.389 16:29:34 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:03.389 16:29:34 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:03.389 16:29:34 -- bdev/blockdev.sh@672 -- # uname -s 00:12:03.389 16:29:34 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:03.389 16:29:34 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:03.389 16:29:34 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:12:03.389 16:29:34 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:03.389 16:29:34 -- bdev/blockdev.sh@682 -- # dek= 00:12:03.389 16:29:34 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:03.389 16:29:34 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:03.389 16:29:34 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:03.389 16:29:34 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:12:03.389 16:29:34 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:12:03.389 16:29:34 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:03.389 16:29:34 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=119631 00:12:03.389 16:29:34 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:03.389 16:29:34 -- bdev/blockdev.sh@47 -- # waitforlisten 119631 00:12:03.389 16:29:34 -- common/autotest_common.sh@819 -- # '[' -z 119631 ']' 00:12:03.389 16:29:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.389 16:29:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:03.389 16:29:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.389 16:29:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:03.389 16:29:34 -- common/autotest_common.sh@10 -- # set +x 00:12:03.389 16:29:34 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:03.389 [2024-07-13 16:29:34.755049] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:03.389 [2024-07-13 16:29:34.755891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119631 ] 00:12:03.647 [2024-07-13 16:29:34.908069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.647 [2024-07-13 16:29:34.985799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:03.647 [2024-07-13 16:29:34.986181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.580 16:29:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:04.580 16:29:35 -- common/autotest_common.sh@852 -- # return 0 00:12:04.580 16:29:35 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:04.580 16:29:35 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:12:04.580 16:29:35 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:12:04.580 16:29:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:04.580 16:29:35 -- common/autotest_common.sh@10 -- # set +x 00:12:04.580 [2024-07-13 16:29:36.047782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:04.580 [2024-07-13 16:29:36.048143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:04.839 00:12:04.839 [2024-07-13 16:29:36.055735] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:04.839 [2024-07-13 16:29:36.055882] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:04.839 00:12:04.839 Malloc0 00:12:04.839 Malloc1 00:12:04.839 Malloc2 00:12:04.839 Malloc3 00:12:04.839 Malloc4 00:12:04.839 Malloc5 00:12:04.839 Malloc6 00:12:04.839 Malloc7 00:12:04.839 Malloc8 00:12:04.839 Malloc9 00:12:04.839 [2024-07-13 16:29:36.279584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:04.839 [2024-07-13 16:29:36.279817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.839 [2024-07-13 16:29:36.279904] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:04.839 [2024-07-13 16:29:36.280045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.839 [2024-07-13 16:29:36.283253] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.839 [2024-07-13 16:29:36.283413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:04.839 TestPT 00:12:05.097 16:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:05.097 16:29:36 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:05.097 5000+0 records in 00:12:05.097 5000+0 records out 00:12:05.097 10240000 bytes (10 MB, 9.8 MiB) copied, 0.029023 s, 353 MB/s 00:12:05.097 16:29:36 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:05.097 16:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:05.097 16:29:36 -- common/autotest_common.sh@10 -- # set +x 00:12:05.097 AIO0 00:12:05.097 16:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:05.097 16:29:36 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:05.097 16:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:05.097 16:29:36 -- common/autotest_common.sh@10 -- # set +x 
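The bdev setup above seeds a 10 MB backing file with dd and registers it as the AIO0 bdev with a 2048-byte block size. The equivalent standalone steps, assuming a running target and the default RPC socket:

  # 5000 blocks of 2048 bytes -> the 10240000-byte aiofile reported above.
  dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048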
00:12:05.097 16:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:05.097 16:29:36 -- bdev/blockdev.sh@738 -- # cat 00:12:05.097 16:29:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:05.097 16:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:05.097 16:29:36 -- common/autotest_common.sh@10 -- # set +x 00:12:05.097 16:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:05.097 16:29:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:05.097 16:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:05.097 16:29:36 -- common/autotest_common.sh@10 -- # set +x 00:12:05.097 16:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:05.097 16:29:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:05.097 16:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:05.097 16:29:36 -- common/autotest_common.sh@10 -- # set +x 00:12:05.097 16:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:05.097 16:29:36 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:05.097 16:29:36 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:05.097 16:29:36 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:05.097 16:29:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:05.097 16:29:36 -- common/autotest_common.sh@10 -- # set +x 00:12:05.356 16:29:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:05.356 16:29:36 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:05.356 16:29:36 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:05.357 16:29:36 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fcf338be-7e14-491d-8a54-bc7aec556e2f"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fcf338be-7e14-491d-8a54-bc7aec556e2f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "99e94d71-015d-5b85-9bc1-2bf35bf81143"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "99e94d71-015d-5b85-9bc1-2bf35bf81143",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3ad384c5-fd74-52bf-b05e-77d66546cc6e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3ad384c5-fd74-52bf-b05e-77d66546cc6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "ec9ec735-c2a7-5e69-9200-08fa49e16821"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ec9ec735-c2a7-5e69-9200-08fa49e16821",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "614ec0c3-3313-534f-813e-9cfd240bd70a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "614ec0c3-3313-534f-813e-9cfd240bd70a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "81b531af-3946-54b2-9091-9cc59441039b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "81b531af-3946-54b2-9091-9cc59441039b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "72c30180-9dae-54a1-a1cc-a48ba9970d2e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "72c30180-9dae-54a1-a1cc-a48ba9970d2e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "cad12f55-2337-5b33-96c5-aee5605882f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cad12f55-2337-5b33-96c5-aee5605882f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "6decb2d6-c69a-5533-a639-75b064ec36c7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6decb2d6-c69a-5533-a639-75b064ec36c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "807a2c63-2349-5f11-9e6b-e17532a6259d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "807a2c63-2349-5f11-9e6b-e17532a6259d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "8a7abe1d-a517-5740-9aa3-edd0a119568f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8a7abe1d-a517-5740-9aa3-edd0a119568f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "7431707f-80b0-5bea-b7f4-1bd557cb8ea5"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7431707f-80b0-5bea-b7f4-1bd557cb8ea5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "dcf368c1-c4fd-4ff4-b396-aa7a044d6670"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dcf368c1-c4fd-4ff4-b396-aa7a044d6670",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "dcf368c1-c4fd-4ff4-b396-aa7a044d6670",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "ebab76cf-6be8-4272-8fa3-6125006d53a4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "63627c1b-1944-49f7-97b8-eaadda54d413",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "68b6f2b0-0d7f-4ba7-9d19-7dd7198528e0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "68b6f2b0-0d7f-4ba7-9d19-7dd7198528e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "68b6f2b0-0d7f-4ba7-9d19-7dd7198528e0",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "3ae902fe-30d8-49b4-9c62-7f4b4612992f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "1ef59058-fc5d-4b21-a595-2e8764525835",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d4e9636d-bb3c-4e95-80a8-ce82ec2a6751"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4e9636d-bb3c-4e95-80a8-ce82ec2a6751",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d4e9636d-bb3c-4e95-80a8-ce82ec2a6751",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "411caae0-e161-4b85-93af-bc1596f5cc97",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "d7629de6-eb7b-4fe9-839b-70d0f4012984",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ec010262-a5e0-42e2-8096-85e914f96470"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ec010262-a5e0-42e2-8096-85e914f96470",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:05.357 16:29:36 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:05.357 16:29:36 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:12:05.357 16:29:36 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:05.357 16:29:36 -- bdev/blockdev.sh@752 -- # killprocess 119631 00:12:05.357 16:29:36 -- common/autotest_common.sh@926 -- # '[' -z 119631 ']' 00:12:05.357 16:29:36 -- common/autotest_common.sh@930 -- # kill -0 119631 00:12:05.357 16:29:36 -- common/autotest_common.sh@931 -- # uname 00:12:05.357 16:29:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:05.357 16:29:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119631 00:12:05.357 16:29:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:05.357 16:29:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:05.357 16:29:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119631' 00:12:05.357 killing process with pid 119631 00:12:05.357 16:29:36 -- common/autotest_common.sh@945 -- # kill 119631 00:12:05.357 16:29:36 -- common/autotest_common.sh@950 -- # wait 119631 00:12:06.293 16:29:37 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:06.293 16:29:37 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:06.293 16:29:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
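The bdev dump above includes TestPT, a passthru vbdev layered on Malloc3 (its driver_specific block names Malloc3 as base_bdev_name). Creating such a passthru device by hand is a single RPC — a sketch, with the names taken from the dump above:

  # Layer a passthru vbdev named TestPT on top of the Malloc3 base bdev.
  scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT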
00:12:06.293 16:29:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:06.293 16:29:37 -- common/autotest_common.sh@10 -- # set +x 00:12:06.293 ************************************ 00:12:06.293 START TEST bdev_hello_world 00:12:06.293 ************************************ 00:12:06.293 16:29:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:06.293 [2024-07-13 16:29:37.675331] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:06.293 [2024-07-13 16:29:37.675857] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119697 ] 00:12:06.551 [2024-07-13 16:29:37.832440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.551 [2024-07-13 16:29:37.905967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.809 [2024-07-13 16:29:38.086136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:06.809 [2024-07-13 16:29:38.086539] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:06.809 [2024-07-13 16:29:38.094045] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:06.809 [2024-07-13 16:29:38.094229] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:06.809 [2024-07-13 16:29:38.102092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:06.809 [2024-07-13 16:29:38.102277] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:06.809 [2024-07-13 16:29:38.102414] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:06.809 [2024-07-13 16:29:38.215079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:06.809 [2024-07-13 16:29:38.215472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.809 [2024-07-13 16:29:38.215578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:06.809 [2024-07-13 16:29:38.215814] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.809 [2024-07-13 16:29:38.218940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.809 [2024-07-13 16:29:38.219105] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:07.067 [2024-07-13 16:29:38.422461] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:07.067 [2024-07-13 16:29:38.422854] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:07.067 [2024-07-13 16:29:38.423139] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:07.067 [2024-07-13 16:29:38.423317] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:07.067 [2024-07-13 16:29:38.423530] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:07.067 [2024-07-13 16:29:38.423830] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:07.067 [2024-07-13 16:29:38.424042] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
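bdev_hello_world above runs the hello_bdev example against Malloc0 from the shared bdev.json config; the example opens the bdev, writes a buffer, reads it back, and prints the string it read ("Hello World!"). The invocation, reduced to its essentials and run from the repo root:

  # Writes "Hello World!" through the bdev layer and reads it back.
  build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0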
00:12:07.067 00:12:07.067 [2024-07-13 16:29:38.424161] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:07.671 00:12:07.671 real 0m1.439s 00:12:07.671 user 0m0.832s 00:12:07.671 sys 0m0.453s 00:12:07.671 ************************************ 00:12:07.671 END TEST bdev_hello_world 00:12:07.671 ************************************ 00:12:07.671 16:29:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.671 16:29:39 -- common/autotest_common.sh@10 -- # set +x 00:12:07.671 16:29:39 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:07.671 16:29:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:07.671 16:29:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.671 16:29:39 -- common/autotest_common.sh@10 -- # set +x 00:12:07.671 ************************************ 00:12:07.671 START TEST bdev_bounds 00:12:07.671 ************************************ 00:12:07.671 16:29:39 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:12:07.671 16:29:39 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:07.671 16:29:39 -- bdev/blockdev.sh@288 -- # bdevio_pid=119735 00:12:07.671 16:29:39 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:07.671 16:29:39 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 119735' 00:12:07.671 Process bdevio pid: 119735 00:12:07.671 16:29:39 -- bdev/blockdev.sh@291 -- # waitforlisten 119735 00:12:07.671 16:29:39 -- common/autotest_common.sh@819 -- # '[' -z 119735 ']' 00:12:07.671 16:29:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.671 16:29:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:07.671 16:29:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.671 16:29:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:07.671 16:29:39 -- common/autotest_common.sh@10 -- # set +x 00:12:07.929 [2024-07-13 16:29:39.168167] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:07.929 [2024-07-13 16:29:39.168599] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119735 ] 00:12:07.929 [2024-07-13 16:29:39.324156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.187 [2024-07-13 16:29:39.401832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.187 [2024-07-13 16:29:39.402016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.187 [2024-07-13 16:29:39.402016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.187 [2024-07-13 16:29:39.584594] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:08.187 [2024-07-13 16:29:39.584956] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:08.187 [2024-07-13 16:29:39.592501] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:08.187 [2024-07-13 16:29:39.592669] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:08.187 [2024-07-13 16:29:39.600576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:08.187 [2024-07-13 16:29:39.600750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:08.187 [2024-07-13 16:29:39.600879] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:08.447 [2024-07-13 16:29:39.715491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:08.447 [2024-07-13 16:29:39.715837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.447 [2024-07-13 16:29:39.715959] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:08.447 [2024-07-13 16:29:39.716075] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.447 [2024-07-13 16:29:39.719171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.447 [2024-07-13 16:29:39.719326] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:08.705 16:29:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:08.705 16:29:40 -- common/autotest_common.sh@852 -- # return 0 00:12:08.705 16:29:40 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:08.705 I/O targets: 00:12:08.705 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:08.705 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:08.705 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:08.705 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:08.705 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:08.705 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:08.705 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:08.705 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:08.705 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:08.706 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:08.706 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:08.706 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:08.706 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:08.706 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:08.706 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:08.706 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
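bdev_bounds above starts the bdevio server over every bdev in bdev.json, then asks tests.py to run the per-bdev CUnit suites listed under "I/O targets". A sketch of that two-step pattern, run from the repo root:

  # Start the bdevio app; -w waits for the perform_tests trigger,
  # -s 0 passes the PRE_RESERVED_MEM=0 seen in the log above.
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  # Kick off the I/O suites against every registered bdev.
  test/bdev/bdevio/tests.py perform_tests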
00:12:08.706 00:12:08.706 00:12:08.706 CUnit - A unit testing framework for C - Version 2.1-3 00:12:08.706 http://cunit.sourceforge.net/ 00:12:08.706 00:12:08.706 00:12:08.706 Suite: bdevio tests on: AIO0 00:12:08.706 Test: blockdev write read block ...passed 00:12:08.706 Test: blockdev write zeroes read block ...passed 00:12:08.965 Test: blockdev write zeroes read no split ...passed 00:12:08.965 Test: blockdev write zeroes read split ...passed 00:12:08.965 Test: blockdev write zeroes read split partial ...passed 00:12:08.965 Test: blockdev reset ...passed 00:12:08.965 Test: blockdev write read 8 blocks ...passed 00:12:08.965 Test: blockdev write read size > 128k ...passed 00:12:08.965 Test: blockdev write read invalid size ...passed 00:12:08.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.965 Test: blockdev write read max offset ...passed 00:12:08.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.965 Test: blockdev writev readv 8 blocks ...passed 00:12:08.965 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.965 Test: blockdev writev readv block ...passed 00:12:08.965 Test: blockdev writev readv size > 128k ...passed 00:12:08.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.965 Test: blockdev comparev and writev ...passed 00:12:08.965 Test: blockdev nvme passthru rw ...passed 00:12:08.965 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.965 Test: blockdev nvme admin passthru ...passed 00:12:08.965 Test: blockdev copy ...passed 00:12:08.965 Suite: bdevio tests on: raid1 00:12:08.965 Test: blockdev write read block ...passed 00:12:08.965 Test: blockdev write zeroes read block ...passed 00:12:08.965 Test: blockdev write zeroes read no split ...passed 00:12:08.965 Test: blockdev write zeroes read split ...passed 00:12:08.965 Test: blockdev write zeroes read split partial ...passed 00:12:08.965 Test: blockdev reset ...passed 00:12:08.965 Test: blockdev write read 8 blocks ...passed 00:12:08.965 Test: blockdev write read size > 128k ...passed 00:12:08.965 Test: blockdev write read invalid size ...passed 00:12:08.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.965 Test: blockdev write read max offset ...passed 00:12:08.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.965 Test: blockdev writev readv 8 blocks ...passed 00:12:08.965 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.965 Test: blockdev writev readv block ...passed 00:12:08.965 Test: blockdev writev readv size > 128k ...passed 00:12:08.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.965 Test: blockdev comparev and writev ...passed 00:12:08.965 Test: blockdev nvme passthru rw ...passed 00:12:08.965 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.965 Test: blockdev nvme admin passthru ...passed 00:12:08.965 Test: blockdev copy ...passed 00:12:08.965 Suite: bdevio tests on: concat0 00:12:08.965 Test: blockdev write read block ...passed 00:12:08.965 Test: blockdev write zeroes read block ...passed 00:12:08.965 Test: blockdev write zeroes read no split ...passed 00:12:08.965 Test: blockdev write zeroes read split ...passed 00:12:08.965 Test: blockdev write zeroes read split partial ...passed 00:12:08.965 Test: blockdev reset 
...passed 00:12:08.965 Test: blockdev write read 8 blocks ...passed 00:12:08.965 Test: blockdev write read size > 128k ...passed 00:12:08.965 Test: blockdev write read invalid size ...passed 00:12:08.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.965 Test: blockdev write read max offset ...passed 00:12:08.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.965 Test: blockdev writev readv 8 blocks ...passed 00:12:08.965 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.965 Test: blockdev writev readv block ...passed 00:12:08.965 Test: blockdev writev readv size > 128k ...passed 00:12:08.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.965 Test: blockdev comparev and writev ...passed 00:12:08.965 Test: blockdev nvme passthru rw ...passed 00:12:08.965 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.965 Test: blockdev nvme admin passthru ...passed 00:12:08.965 Test: blockdev copy ...passed 00:12:08.965 Suite: bdevio tests on: raid0 00:12:08.965 Test: blockdev write read block ...passed 00:12:08.965 Test: blockdev write zeroes read block ...passed 00:12:08.965 Test: blockdev write zeroes read no split ...passed 00:12:08.965 Test: blockdev write zeroes read split ...passed 00:12:08.965 Test: blockdev write zeroes read split partial ...passed 00:12:08.965 Test: blockdev reset ...passed 00:12:08.965 Test: blockdev write read 8 blocks ...passed 00:12:08.965 Test: blockdev write read size > 128k ...passed 00:12:08.965 Test: blockdev write read invalid size ...passed 00:12:08.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.965 Test: blockdev write read max offset ...passed 00:12:08.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.965 Test: blockdev writev readv 8 blocks ...passed 00:12:08.965 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.965 Test: blockdev writev readv block ...passed 00:12:08.965 Test: blockdev writev readv size > 128k ...passed 00:12:08.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.965 Test: blockdev comparev and writev ...passed 00:12:08.965 Test: blockdev nvme passthru rw ...passed 00:12:08.965 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.965 Test: blockdev nvme admin passthru ...passed 00:12:08.965 Test: blockdev copy ...passed 00:12:08.965 Suite: bdevio tests on: TestPT 00:12:08.965 Test: blockdev write read block ...passed 00:12:08.965 Test: blockdev write zeroes read block ...passed 00:12:08.965 Test: blockdev write zeroes read no split ...passed 00:12:08.965 Test: blockdev write zeroes read split ...passed 00:12:08.965 Test: blockdev write zeroes read split partial ...passed 00:12:08.965 Test: blockdev reset ...passed 00:12:08.965 Test: blockdev write read 8 blocks ...passed 00:12:08.965 Test: blockdev write read size > 128k ...passed 00:12:08.965 Test: blockdev write read invalid size ...passed 00:12:08.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.965 Test: blockdev write read max offset ...passed 00:12:08.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.965 Test: blockdev writev readv 8 blocks 
...passed 00:12:08.965 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.965 Test: blockdev writev readv block ...passed 00:12:08.965 Test: blockdev writev readv size > 128k ...passed 00:12:08.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.965 Test: blockdev comparev and writev ...passed 00:12:08.965 Test: blockdev nvme passthru rw ...passed 00:12:08.965 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.965 Test: blockdev nvme admin passthru ...passed 00:12:08.965 Test: blockdev copy ...passed 00:12:08.965 Suite: bdevio tests on: Malloc2p7 00:12:08.965 Test: blockdev write read block ...passed 00:12:08.965 Test: blockdev write zeroes read block ...passed 00:12:08.965 Test: blockdev write zeroes read no split ...passed 00:12:08.965 Test: blockdev write zeroes read split ...passed 00:12:08.965 Test: blockdev write zeroes read split partial ...passed 00:12:08.965 Test: blockdev reset ...passed 00:12:08.965 Test: blockdev write read 8 blocks ...passed 00:12:08.965 Test: blockdev write read size > 128k ...passed 00:12:08.965 Test: blockdev write read invalid size ...passed 00:12:08.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.965 Test: blockdev write read max offset ...passed 00:12:08.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.965 Test: blockdev writev readv 8 blocks ...passed 00:12:08.965 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.965 Test: blockdev writev readv block ...passed 00:12:08.965 Test: blockdev writev readv size > 128k ...passed 00:12:08.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.965 Test: blockdev comparev and writev ...passed 00:12:08.965 Test: blockdev nvme passthru rw ...passed 00:12:08.965 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.965 Test: blockdev nvme admin passthru ...passed 00:12:08.965 Test: blockdev copy ...passed 00:12:08.965 Suite: bdevio tests on: Malloc2p6 00:12:08.965 Test: blockdev write read block ...passed 00:12:08.965 Test: blockdev write zeroes read block ...passed 00:12:08.965 Test: blockdev write zeroes read no split ...passed 00:12:08.965 Test: blockdev write zeroes read split ...passed 00:12:08.965 Test: blockdev write zeroes read split partial ...passed 00:12:08.965 Test: blockdev reset ...passed 00:12:08.965 Test: blockdev write read 8 blocks ...passed 00:12:08.965 Test: blockdev write read size > 128k ...passed 00:12:08.965 Test: blockdev write read invalid size ...passed 00:12:08.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.966 Test: blockdev write read max offset ...passed 00:12:08.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.966 Test: blockdev writev readv 8 blocks ...passed 00:12:08.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.966 Test: blockdev writev readv block ...passed 00:12:08.966 Test: blockdev writev readv size > 128k ...passed 00:12:08.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.966 Test: blockdev comparev and writev ...passed 00:12:08.966 Test: blockdev nvme passthru rw ...passed 00:12:08.966 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.966 Test: blockdev nvme admin passthru ...passed 00:12:08.966 Test: blockdev copy ...passed 
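Each "Suite: bdevio tests on: <bdev>" block above and below is the same 23-test battery (368 tests / 16 suites in the run summary) applied to a different bdev from bdev.json: basic and vectored read/write, boundary and invalid-size cases, reset, compare-and-write, NVMe passthru, and copy. A minimal sketch of driving the same bdevio app by hand — the flag spelling follows general SPDK app conventions and is illustrative, not the exact harness command:

    # start bdevio against the job's bdev config, then trigger all suites over RPC
    ./test/bdev/bdevio/bdevio -w --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    ./test/bdev/bdevio/tests.py perform_tests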
00:12:08.966 Suite: bdevio tests on: Malloc2p5 00:12:08.966 Test: blockdev write read block ...passed 00:12:08.966 Test: blockdev write zeroes read block ...passed 00:12:08.966 Test: blockdev write zeroes read no split ...passed 00:12:08.966 Test: blockdev write zeroes read split ...passed 00:12:08.966 Test: blockdev write zeroes read split partial ...passed 00:12:08.966 Test: blockdev reset ...passed 00:12:08.966 Test: blockdev write read 8 blocks ...passed 00:12:08.966 Test: blockdev write read size > 128k ...passed 00:12:08.966 Test: blockdev write read invalid size ...passed 00:12:08.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.966 Test: blockdev write read max offset ...passed 00:12:08.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.966 Test: blockdev writev readv 8 blocks ...passed 00:12:08.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.966 Test: blockdev writev readv block ...passed 00:12:08.966 Test: blockdev writev readv size > 128k ...passed 00:12:08.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.966 Test: blockdev comparev and writev ...passed 00:12:08.966 Test: blockdev nvme passthru rw ...passed 00:12:08.966 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.966 Test: blockdev nvme admin passthru ...passed 00:12:08.966 Test: blockdev copy ...passed 00:12:08.966 Suite: bdevio tests on: Malloc2p4 00:12:08.966 Test: blockdev write read block ...passed 00:12:08.966 Test: blockdev write zeroes read block ...passed 00:12:08.966 Test: blockdev write zeroes read no split ...passed 00:12:08.966 Test: blockdev write zeroes read split ...passed 00:12:08.966 Test: blockdev write zeroes read split partial ...passed 00:12:08.966 Test: blockdev reset ...passed 00:12:08.966 Test: blockdev write read 8 blocks ...passed 00:12:08.966 Test: blockdev write read size > 128k ...passed 00:12:08.966 Test: blockdev write read invalid size ...passed 00:12:08.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.966 Test: blockdev write read max offset ...passed 00:12:08.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.966 Test: blockdev writev readv 8 blocks ...passed 00:12:08.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.966 Test: blockdev writev readv block ...passed 00:12:08.966 Test: blockdev writev readv size > 128k ...passed 00:12:08.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.966 Test: blockdev comparev and writev ...passed 00:12:08.966 Test: blockdev nvme passthru rw ...passed 00:12:08.966 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.966 Test: blockdev nvme admin passthru ...passed 00:12:08.966 Test: blockdev copy ...passed 00:12:08.966 Suite: bdevio tests on: Malloc2p3 00:12:08.966 Test: blockdev write read block ...passed 00:12:08.966 Test: blockdev write zeroes read block ...passed 00:12:08.966 Test: blockdev write zeroes read no split ...passed 00:12:08.966 Test: blockdev write zeroes read split ...passed 00:12:08.966 Test: blockdev write zeroes read split partial ...passed 00:12:08.966 Test: blockdev reset ...passed 00:12:08.966 Test: blockdev write read 8 blocks ...passed 00:12:08.966 Test: blockdev write read size > 128k ...passed 00:12:08.966 Test: 
blockdev write read invalid size ...passed 00:12:08.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.966 Test: blockdev write read max offset ...passed 00:12:08.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.966 Test: blockdev writev readv 8 blocks ...passed 00:12:08.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.966 Test: blockdev writev readv block ...passed 00:12:08.966 Test: blockdev writev readv size > 128k ...passed 00:12:08.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.966 Test: blockdev comparev and writev ...passed 00:12:08.966 Test: blockdev nvme passthru rw ...passed 00:12:08.966 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.966 Test: blockdev nvme admin passthru ...passed 00:12:08.966 Test: blockdev copy ...passed 00:12:08.966 Suite: bdevio tests on: Malloc2p2 00:12:08.966 Test: blockdev write read block ...passed 00:12:08.966 Test: blockdev write zeroes read block ...passed 00:12:08.966 Test: blockdev write zeroes read no split ...passed 00:12:08.966 Test: blockdev write zeroes read split ...passed 00:12:08.966 Test: blockdev write zeroes read split partial ...passed 00:12:08.966 Test: blockdev reset ...passed 00:12:08.966 Test: blockdev write read 8 blocks ...passed 00:12:08.966 Test: blockdev write read size > 128k ...passed 00:12:08.966 Test: blockdev write read invalid size ...passed 00:12:08.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.966 Test: blockdev write read max offset ...passed 00:12:08.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.966 Test: blockdev writev readv 8 blocks ...passed 00:12:08.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.966 Test: blockdev writev readv block ...passed 00:12:08.966 Test: blockdev writev readv size > 128k ...passed 00:12:08.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.966 Test: blockdev comparev and writev ...passed 00:12:08.966 Test: blockdev nvme passthru rw ...passed 00:12:08.966 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.966 Test: blockdev nvme admin passthru ...passed 00:12:08.966 Test: blockdev copy ...passed 00:12:08.966 Suite: bdevio tests on: Malloc2p1 00:12:08.966 Test: blockdev write read block ...passed 00:12:08.966 Test: blockdev write zeroes read block ...passed 00:12:08.966 Test: blockdev write zeroes read no split ...passed 00:12:08.966 Test: blockdev write zeroes read split ...passed 00:12:08.966 Test: blockdev write zeroes read split partial ...passed 00:12:08.966 Test: blockdev reset ...passed 00:12:08.966 Test: blockdev write read 8 blocks ...passed 00:12:08.966 Test: blockdev write read size > 128k ...passed 00:12:08.966 Test: blockdev write read invalid size ...passed 00:12:08.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:08.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:08.966 Test: blockdev write read max offset ...passed 00:12:08.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:08.966 Test: blockdev writev readv 8 blocks ...passed 00:12:08.966 Test: blockdev writev readv 30 x 1block ...passed 00:12:08.966 Test: blockdev writev readv block ...passed 
00:12:08.966 Test: blockdev writev readv size > 128k ...passed 00:12:08.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:08.966 Test: blockdev comparev and writev ...passed 00:12:08.966 Test: blockdev nvme passthru rw ...passed 00:12:08.966 Test: blockdev nvme passthru vendor specific ...passed 00:12:08.966 Test: blockdev nvme admin passthru ...passed 00:12:08.966 Test: blockdev copy ...passed 00:12:08.966 Suite: bdevio tests on: Malloc2p0 00:12:08.966 Test: blockdev write read block ...passed 00:12:08.966 Test: blockdev write zeroes read block ...passed 00:12:08.966 Test: blockdev write zeroes read no split ...passed 00:12:08.966 Test: blockdev write zeroes read split ...passed 00:12:09.225 Test: blockdev write zeroes read split partial ...passed 00:12:09.225 Test: blockdev reset ...passed 00:12:09.225 Test: blockdev write read 8 blocks ...passed 00:12:09.225 Test: blockdev write read size > 128k ...passed 00:12:09.225 Test: blockdev write read invalid size ...passed 00:12:09.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:09.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:09.225 Test: blockdev write read max offset ...passed 00:12:09.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:09.225 Test: blockdev writev readv 8 blocks ...passed 00:12:09.225 Test: blockdev writev readv 30 x 1block ...passed 00:12:09.225 Test: blockdev writev readv block ...passed 00:12:09.225 Test: blockdev writev readv size > 128k ...passed 00:12:09.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:09.225 Test: blockdev comparev and writev ...passed 00:12:09.225 Test: blockdev nvme passthru rw ...passed 00:12:09.225 Test: blockdev nvme passthru vendor specific ...passed 00:12:09.225 Test: blockdev nvme admin passthru ...passed 00:12:09.225 Test: blockdev copy ...passed 00:12:09.225 Suite: bdevio tests on: Malloc1p1 00:12:09.225 Test: blockdev write read block ...passed 00:12:09.225 Test: blockdev write zeroes read block ...passed 00:12:09.225 Test: blockdev write zeroes read no split ...passed 00:12:09.225 Test: blockdev write zeroes read split ...passed 00:12:09.225 Test: blockdev write zeroes read split partial ...passed 00:12:09.225 Test: blockdev reset ...passed 00:12:09.225 Test: blockdev write read 8 blocks ...passed 00:12:09.225 Test: blockdev write read size > 128k ...passed 00:12:09.225 Test: blockdev write read invalid size ...passed 00:12:09.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:09.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:09.225 Test: blockdev write read max offset ...passed 00:12:09.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:09.225 Test: blockdev writev readv 8 blocks ...passed 00:12:09.225 Test: blockdev writev readv 30 x 1block ...passed 00:12:09.225 Test: blockdev writev readv block ...passed 00:12:09.225 Test: blockdev writev readv size > 128k ...passed 00:12:09.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:09.225 Test: blockdev comparev and writev ...passed 00:12:09.225 Test: blockdev nvme passthru rw ...passed 00:12:09.225 Test: blockdev nvme passthru vendor specific ...passed 00:12:09.225 Test: blockdev nvme admin passthru ...passed 00:12:09.225 Test: blockdev copy ...passed 00:12:09.225 Suite: bdevio tests on: Malloc1p0 00:12:09.225 Test: blockdev write read block ...passed 00:12:09.225 Test: blockdev 
write zeroes read block ...passed 00:12:09.225 Test: blockdev write zeroes read no split ...passed 00:12:09.225 Test: blockdev write zeroes read split ...passed 00:12:09.225 Test: blockdev write zeroes read split partial ...passed 00:12:09.225 Test: blockdev reset ...passed 00:12:09.225 Test: blockdev write read 8 blocks ...passed 00:12:09.225 Test: blockdev write read size > 128k ...passed 00:12:09.225 Test: blockdev write read invalid size ...passed 00:12:09.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:09.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:09.226 Test: blockdev write read max offset ...passed 00:12:09.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:09.226 Test: blockdev writev readv 8 blocks ...passed 00:12:09.226 Test: blockdev writev readv 30 x 1block ...passed 00:12:09.226 Test: blockdev writev readv block ...passed 00:12:09.226 Test: blockdev writev readv size > 128k ...passed 00:12:09.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:09.226 Test: blockdev comparev and writev ...passed 00:12:09.226 Test: blockdev nvme passthru rw ...passed 00:12:09.226 Test: blockdev nvme passthru vendor specific ...passed 00:12:09.226 Test: blockdev nvme admin passthru ...passed 00:12:09.226 Test: blockdev copy ...passed 00:12:09.226 Suite: bdevio tests on: Malloc0 00:12:09.226 Test: blockdev write read block ...passed 00:12:09.226 Test: blockdev write zeroes read block ...passed 00:12:09.226 Test: blockdev write zeroes read no split ...passed 00:12:09.226 Test: blockdev write zeroes read split ...passed 00:12:09.226 Test: blockdev write zeroes read split partial ...passed 00:12:09.226 Test: blockdev reset ...passed 00:12:09.226 Test: blockdev write read 8 blocks ...passed 00:12:09.226 Test: blockdev write read size > 128k ...passed 00:12:09.226 Test: blockdev write read invalid size ...passed 00:12:09.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:09.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:09.226 Test: blockdev write read max offset ...passed 00:12:09.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:09.226 Test: blockdev writev readv 8 blocks ...passed 00:12:09.226 Test: blockdev writev readv 30 x 1block ...passed 00:12:09.226 Test: blockdev writev readv block ...passed 00:12:09.226 Test: blockdev writev readv size > 128k ...passed 00:12:09.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:09.226 Test: blockdev comparev and writev ...passed 00:12:09.226 Test: blockdev nvme passthru rw ...passed 00:12:09.226 Test: blockdev nvme passthru vendor specific ...passed 00:12:09.226 Test: blockdev nvme admin passthru ...passed 00:12:09.226 Test: blockdev copy ...passed 00:12:09.226 00:12:09.226 Run Summary: Type Total Ran Passed Failed Inactive 00:12:09.226 suites 16 16 n/a 0 0 00:12:09.226 tests 368 368 368 0 0 00:12:09.226 asserts 2224 2224 2224 0 n/a 00:12:09.226 00:12:09.226 Elapsed time = 0.683 seconds 00:12:09.226 0 00:12:09.226 16:29:40 -- bdev/blockdev.sh@293 -- # killprocess 119735 00:12:09.226 16:29:40 -- common/autotest_common.sh@926 -- # '[' -z 119735 ']' 00:12:09.226 16:29:40 -- common/autotest_common.sh@930 -- # kill -0 119735 00:12:09.226 16:29:40 -- common/autotest_common.sh@931 -- # uname 00:12:09.226 16:29:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:09.226 16:29:40 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119735 00:12:09.226 killing process with pid 119735 00:12:09.226 16:29:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:09.226 16:29:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:09.226 16:29:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119735' 00:12:09.226 16:29:40 -- common/autotest_common.sh@945 -- # kill 119735 00:12:09.226 16:29:40 -- common/autotest_common.sh@950 -- # wait 119735 00:12:09.793 ************************************ 00:12:09.793 END TEST bdev_bounds 00:12:09.793 ************************************ 00:12:09.793 16:29:41 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:09.793 00:12:09.793 real 0m2.029s 00:12:09.793 user 0m4.581s 00:12:09.793 sys 0m0.592s 00:12:09.793 16:29:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.793 16:29:41 -- common/autotest_common.sh@10 -- # set +x 00:12:09.793 16:29:41 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:09.793 16:29:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:09.793 16:29:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:09.793 16:29:41 -- common/autotest_common.sh@10 -- # set +x 00:12:09.793 ************************************ 00:12:09.793 START TEST bdev_nbd 00:12:09.793 ************************************ 00:12:09.793 16:29:41 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:09.793 16:29:41 -- bdev/blockdev.sh@298 -- # uname -s 00:12:09.793 16:29:41 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:09.793 16:29:41 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.793 16:29:41 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:09.793 16:29:41 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:09.793 16:29:41 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:09.793 16:29:41 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:12:09.793 16:29:41 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:09.793 16:29:41 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:09.793 16:29:41 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:09.793 16:29:41 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:12:09.793 16:29:41 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:09.793 16:29:41 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:09.793 16:29:41 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:09.793 16:29:41 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:09.793 16:29:41 -- bdev/blockdev.sh@316 -- # nbd_pid=119798 00:12:09.793 16:29:41 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:09.793 16:29:41 -- bdev/blockdev.sh@318 -- # waitforlisten 119798 /var/tmp/spdk-nbd.sock 00:12:09.793 16:29:41 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:09.793 16:29:41 -- common/autotest_common.sh@819 -- # '[' -z 119798 ']' 00:12:09.793 16:29:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:09.793 16:29:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:09.793 16:29:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:09.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:09.793 16:29:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:09.793 16:29:41 -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 [2024-07-13 16:29:41.297431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:10.051 [2024-07-13 16:29:41.298462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.051 [2024-07-13 16:29:41.453941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.311 [2024-07-13 16:29:41.529956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.311 [2024-07-13 16:29:41.711714] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:10.311 [2024-07-13 16:29:41.712017] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:10.311 [2024-07-13 16:29:41.719627] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:10.311 [2024-07-13 16:29:41.719784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:10.311 [2024-07-13 16:29:41.727671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:10.311 [2024-07-13 16:29:41.727847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:10.311 [2024-07-13 16:29:41.727951] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:10.570 [2024-07-13 16:29:41.841433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:10.570 [2024-07-13 16:29:41.841785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.570 [2024-07-13 16:29:41.841890] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:10.570 [2024-07-13 16:29:41.842001] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.570 [2024-07-13 16:29:41.845031] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.570 [2024-07-13 16:29:41.845197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:10.828 16:29:42 -- common/autotest_common.sh@848 -- # (( i == 0 
)) 00:12:10.828 16:29:42 -- common/autotest_common.sh@852 -- # return 0 00:12:10.828 16:29:42 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@24 -- # local i 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:10.828 16:29:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:11.087 16:29:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:11.087 16:29:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:11.087 16:29:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:11.087 16:29:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:11.087 16:29:42 -- common/autotest_common.sh@857 -- # local i 00:12:11.087 16:29:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:11.087 16:29:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:11.087 16:29:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:11.087 16:29:42 -- common/autotest_common.sh@861 -- # break 00:12:11.087 16:29:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:11.087 16:29:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:11.087 16:29:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.087 1+0 records in 00:12:11.087 1+0 records out 00:12:11.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539532 s, 7.6 MB/s 00:12:11.087 16:29:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.087 16:29:42 -- common/autotest_common.sh@874 -- # size=4096 00:12:11.087 16:29:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.087 16:29:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:11.087 16:29:42 -- common/autotest_common.sh@877 -- # return 0 00:12:11.087 16:29:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:11.087 16:29:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:11.087 16:29:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:11.345 16:29:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:11.345 16:29:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:11.345 16:29:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:11.345 16:29:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:11.345 16:29:42 -- common/autotest_common.sh@857 -- # local i 00:12:11.345 16:29:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:11.345 16:29:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:11.345 16:29:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:11.345 16:29:42 -- common/autotest_common.sh@861 -- # break 00:12:11.345 16:29:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:11.345 16:29:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:11.345 16:29:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.345 1+0 records in 00:12:11.345 1+0 records out 00:12:11.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614539 s, 6.7 MB/s 00:12:11.345 16:29:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.345 16:29:42 -- common/autotest_common.sh@874 -- # size=4096 00:12:11.345 16:29:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.345 16:29:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:11.345 16:29:42 -- common/autotest_common.sh@877 -- # return 0 00:12:11.345 16:29:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:11.345 16:29:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:11.345 16:29:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:11.602 16:29:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:11.602 16:29:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:11.602 16:29:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:11.602 16:29:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:11.602 16:29:42 -- common/autotest_common.sh@857 -- # local i 00:12:11.602 16:29:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:11.602 16:29:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:11.602 16:29:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:11.602 16:29:42 -- common/autotest_common.sh@861 -- # break 00:12:11.602 16:29:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:11.602 16:29:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:11.602 16:29:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.602 1+0 records in 00:12:11.602 1+0 records out 00:12:11.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432511 s, 9.5 MB/s 00:12:11.602 16:29:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.602 16:29:42 -- common/autotest_common.sh@874 -- # size=4096 00:12:11.602 16:29:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.602 16:29:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:11.602 16:29:42 -- common/autotest_common.sh@877 -- # return 0 00:12:11.602 16:29:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:11.602 16:29:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
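Each of these per-device blocks is one iteration of the same start-and-verify loop over the sixteen bdevs: the nbd_start_disk RPC allocates the next free /dev/nbdN node, and waitfornbd then gates the loop until that node is usable. Condensed, the per-bdev call just traced (socket and script paths as used throughout this log) is:

    # the RPC prints the allocated node (here /dev/nbd2 for Malloc1p1),
    # which is then polled and smoke-tested before the loop advances
    nbd_device=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1)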
00:12:11.602 16:29:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:11.859 16:29:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:11.859 16:29:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:11.859 16:29:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:11.859 16:29:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:11.859 16:29:43 -- common/autotest_common.sh@857 -- # local i 00:12:11.859 16:29:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:11.859 16:29:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:11.859 16:29:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:11.859 16:29:43 -- common/autotest_common.sh@861 -- # break 00:12:11.859 16:29:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:11.859 16:29:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:11.859 16:29:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.859 1+0 records in 00:12:11.859 1+0 records out 00:12:11.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591596 s, 6.9 MB/s 00:12:11.859 16:29:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.859 16:29:43 -- common/autotest_common.sh@874 -- # size=4096 00:12:11.859 16:29:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.859 16:29:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:11.859 16:29:43 -- common/autotest_common.sh@877 -- # return 0 00:12:11.859 16:29:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:11.859 16:29:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:11.859 16:29:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:12.117 16:29:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:12.117 16:29:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:12.117 16:29:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:12.117 16:29:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:12.117 16:29:43 -- common/autotest_common.sh@857 -- # local i 00:12:12.117 16:29:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:12.117 16:29:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:12.117 16:29:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:12.117 16:29:43 -- common/autotest_common.sh@861 -- # break 00:12:12.117 16:29:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:12.117 16:29:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:12.117 16:29:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.117 1+0 records in 00:12:12.117 1+0 records out 00:12:12.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739453 s, 5.5 MB/s 00:12:12.117 16:29:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.117 16:29:43 -- common/autotest_common.sh@874 -- # size=4096 00:12:12.117 16:29:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.117 16:29:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:12.117 16:29:43 -- common/autotest_common.sh@877 -- # return 0 00:12:12.117 16:29:43 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:12.117 16:29:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:12.117 16:29:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:12.375 16:29:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:12.375 16:29:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:12.375 16:29:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:12.375 16:29:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:12.375 16:29:43 -- common/autotest_common.sh@857 -- # local i 00:12:12.375 16:29:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:12.375 16:29:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:12.375 16:29:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:12.375 16:29:43 -- common/autotest_common.sh@861 -- # break 00:12:12.375 16:29:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:12.375 16:29:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:12.375 16:29:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.375 1+0 records in 00:12:12.375 1+0 records out 00:12:12.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539656 s, 7.6 MB/s 00:12:12.375 16:29:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.375 16:29:43 -- common/autotest_common.sh@874 -- # size=4096 00:12:12.375 16:29:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.375 16:29:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:12.375 16:29:43 -- common/autotest_common.sh@877 -- # return 0 00:12:12.375 16:29:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:12.375 16:29:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:12.375 16:29:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:12.942 16:29:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:12.942 16:29:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:12.942 16:29:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:12.942 16:29:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:12.942 16:29:44 -- common/autotest_common.sh@857 -- # local i 00:12:12.942 16:29:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:12.942 16:29:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:12.942 16:29:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:12.942 16:29:44 -- common/autotest_common.sh@861 -- # break 00:12:12.942 16:29:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:12.942 16:29:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:12.942 16:29:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.942 1+0 records in 00:12:12.942 1+0 records out 00:12:12.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000895245 s, 4.6 MB/s 00:12:12.942 16:29:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.942 16:29:44 -- common/autotest_common.sh@874 -- # size=4096 00:12:12.942 16:29:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.942 16:29:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
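The waitfornbd helper traced in each of these blocks does the gating: it polls /proc/partitions until the kernel exposes the node, then proves it readable with one direct 4 KiB read whose output size must be non-zero. A condensed sketch of that pattern — the retry bound, dd invocation, and test-file path appear in the trace itself; the sleep pacing is illustrative:

    waitfornbd() {
        local nbd_name=$1 size i
        # phase 1: wait for the kernel to list the new device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # illustrative delay; the real helper's pacing may differ
        done
        # phase 2: one O_DIRECT read; a non-empty copy proves the mapping is live
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
                rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }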
00:12:12.942 16:29:44 -- common/autotest_common.sh@877 -- # return 0 00:12:12.942 16:29:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:12.942 16:29:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:12.942 16:29:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:13.201 16:29:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:13.201 16:29:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:13.201 16:29:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:13.201 16:29:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:13.201 16:29:44 -- common/autotest_common.sh@857 -- # local i 00:12:13.201 16:29:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:13.201 16:29:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:13.201 16:29:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:13.201 16:29:44 -- common/autotest_common.sh@861 -- # break 00:12:13.201 16:29:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:13.201 16:29:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:13.201 16:29:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.201 1+0 records in 00:12:13.201 1+0 records out 00:12:13.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00086285 s, 4.7 MB/s 00:12:13.201 16:29:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.201 16:29:44 -- common/autotest_common.sh@874 -- # size=4096 00:12:13.201 16:29:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.201 16:29:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:13.201 16:29:44 -- common/autotest_common.sh@877 -- # return 0 00:12:13.201 16:29:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:13.201 16:29:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:13.201 16:29:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:13.460 16:29:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:13.460 16:29:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:13.460 16:29:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:13.460 16:29:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:13.460 16:29:44 -- common/autotest_common.sh@857 -- # local i 00:12:13.460 16:29:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:13.460 16:29:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:13.460 16:29:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:13.460 16:29:44 -- common/autotest_common.sh@861 -- # break 00:12:13.460 16:29:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:13.460 16:29:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:13.460 16:29:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.460 1+0 records in 00:12:13.460 1+0 records out 00:12:13.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828175 s, 4.9 MB/s 00:12:13.460 16:29:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.460 16:29:44 -- common/autotest_common.sh@874 -- # size=4096 00:12:13.460 16:29:44 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.460 16:29:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:13.460 16:29:44 -- common/autotest_common.sh@877 -- # return 0 00:12:13.460 16:29:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:13.460 16:29:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:13.460 16:29:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:13.718 16:29:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:13.718 16:29:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:13.718 16:29:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:13.718 16:29:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:13.718 16:29:45 -- common/autotest_common.sh@857 -- # local i 00:12:13.718 16:29:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:13.718 16:29:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:13.718 16:29:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:13.718 16:29:45 -- common/autotest_common.sh@861 -- # break 00:12:13.718 16:29:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:13.718 16:29:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:13.718 16:29:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.718 1+0 records in 00:12:13.718 1+0 records out 00:12:13.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733805 s, 5.6 MB/s 00:12:13.719 16:29:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.719 16:29:45 -- common/autotest_common.sh@874 -- # size=4096 00:12:13.719 16:29:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.719 16:29:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:13.719 16:29:45 -- common/autotest_common.sh@877 -- # return 0 00:12:13.719 16:29:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:13.719 16:29:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:13.719 16:29:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:13.977 16:29:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:13.977 16:29:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:13.977 16:29:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:13.977 16:29:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:13.977 16:29:45 -- common/autotest_common.sh@857 -- # local i 00:12:13.977 16:29:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:13.977 16:29:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:13.977 16:29:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:13.977 16:29:45 -- common/autotest_common.sh@861 -- # break 00:12:13.977 16:29:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:13.977 16:29:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:13.977 16:29:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.977 1+0 records in 00:12:13.977 1+0 records out 00:12:13.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700233 s, 5.8 MB/s 00:12:13.977 16:29:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.977 16:29:45 -- 
common/autotest_common.sh@874 -- # size=4096 00:12:13.977 16:29:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.977 16:29:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:13.977 16:29:45 -- common/autotest_common.sh@877 -- # return 0 00:12:13.977 16:29:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:13.977 16:29:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:13.977 16:29:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:14.240 16:29:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:14.240 16:29:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:14.240 16:29:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:14.240 16:29:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:14.240 16:29:45 -- common/autotest_common.sh@857 -- # local i 00:12:14.240 16:29:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:14.240 16:29:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:14.240 16:29:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:14.240 16:29:45 -- common/autotest_common.sh@861 -- # break 00:12:14.240 16:29:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:14.240 16:29:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:14.240 16:29:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.240 1+0 records in 00:12:14.240 1+0 records out 00:12:14.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794455 s, 5.2 MB/s 00:12:14.240 16:29:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.240 16:29:45 -- common/autotest_common.sh@874 -- # size=4096 00:12:14.240 16:29:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.240 16:29:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:14.240 16:29:45 -- common/autotest_common.sh@877 -- # return 0 00:12:14.240 16:29:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:14.240 16:29:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:14.240 16:29:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:14.604 16:29:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:14.604 16:29:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:14.604 16:29:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:14.604 16:29:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:14.604 16:29:45 -- common/autotest_common.sh@857 -- # local i 00:12:14.604 16:29:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:14.604 16:29:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:14.604 16:29:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:14.604 16:29:45 -- common/autotest_common.sh@861 -- # break 00:12:14.604 16:29:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:14.604 16:29:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:14.604 16:29:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.604 1+0 records in 00:12:14.604 1+0 records out 00:12:14.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00214054 s, 1.9 MB/s 00:12:14.604 16:29:45 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.604 16:29:45 -- common/autotest_common.sh@874 -- # size=4096 00:12:14.604 16:29:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.604 16:29:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:14.604 16:29:45 -- common/autotest_common.sh@877 -- # return 0 00:12:14.604 16:29:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:14.604 16:29:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:14.604 16:29:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:14.862 16:29:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:14.862 16:29:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:14.862 16:29:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:14.862 16:29:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:14.862 16:29:46 -- common/autotest_common.sh@857 -- # local i 00:12:14.862 16:29:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:14.862 16:29:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:14.862 16:29:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:14.862 16:29:46 -- common/autotest_common.sh@861 -- # break 00:12:14.862 16:29:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:14.862 16:29:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:14.862 16:29:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.862 1+0 records in 00:12:14.862 1+0 records out 00:12:14.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000895855 s, 4.6 MB/s 00:12:14.862 16:29:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.862 16:29:46 -- common/autotest_common.sh@874 -- # size=4096 00:12:14.862 16:29:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.862 16:29:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:14.862 16:29:46 -- common/autotest_common.sh@877 -- # return 0 00:12:14.862 16:29:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:14.862 16:29:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:14.862 16:29:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:15.121 16:29:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:15.121 16:29:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:15.121 16:29:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:15.121 16:29:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:15.121 16:29:46 -- common/autotest_common.sh@857 -- # local i 00:12:15.121 16:29:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:15.121 16:29:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:15.121 16:29:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:15.121 16:29:46 -- common/autotest_common.sh@861 -- # break 00:12:15.121 16:29:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:15.121 16:29:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:15.121 16:29:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.121 1+0 records in 00:12:15.121 1+0 records out 
00:12:15.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010051 s, 4.1 MB/s 00:12:15.121 16:29:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.121 16:29:46 -- common/autotest_common.sh@874 -- # size=4096 00:12:15.121 16:29:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.121 16:29:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:15.121 16:29:46 -- common/autotest_common.sh@877 -- # return 0 00:12:15.121 16:29:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:15.121 16:29:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:15.121 16:29:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:15.378 16:29:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:15.379 16:29:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:15.379 16:29:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:15.379 16:29:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:15.379 16:29:46 -- common/autotest_common.sh@857 -- # local i 00:12:15.379 16:29:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:15.379 16:29:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:15.379 16:29:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:15.379 16:29:46 -- common/autotest_common.sh@861 -- # break 00:12:15.379 16:29:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:15.379 16:29:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:15.379 16:29:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.379 1+0 records in 00:12:15.379 1+0 records out 00:12:15.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118359 s, 3.5 MB/s 00:12:15.379 16:29:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.379 16:29:46 -- common/autotest_common.sh@874 -- # size=4096 00:12:15.379 16:29:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.379 16:29:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:15.379 16:29:46 -- common/autotest_common.sh@877 -- # return 0 00:12:15.379 16:29:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:15.379 16:29:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:15.379 16:29:46 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:15.636 16:29:46 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:15.636 { 00:12:15.636 "nbd_device": "/dev/nbd0", 00:12:15.637 "bdev_name": "Malloc0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd1", 00:12:15.637 "bdev_name": "Malloc1p0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd2", 00:12:15.637 "bdev_name": "Malloc1p1" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd3", 00:12:15.637 "bdev_name": "Malloc2p0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd4", 00:12:15.637 "bdev_name": "Malloc2p1" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd5", 00:12:15.637 "bdev_name": "Malloc2p2" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd6", 00:12:15.637 "bdev_name": "Malloc2p3" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd7", 00:12:15.637 "bdev_name": "Malloc2p4" 00:12:15.637 }, 
00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd8", 00:12:15.637 "bdev_name": "Malloc2p5" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd9", 00:12:15.637 "bdev_name": "Malloc2p6" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd10", 00:12:15.637 "bdev_name": "Malloc2p7" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd11", 00:12:15.637 "bdev_name": "TestPT" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd12", 00:12:15.637 "bdev_name": "raid0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd13", 00:12:15.637 "bdev_name": "concat0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd14", 00:12:15.637 "bdev_name": "raid1" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd15", 00:12:15.637 "bdev_name": "AIO0" 00:12:15.637 } 00:12:15.637 ]' 00:12:15.637 16:29:46 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:15.637 16:29:46 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd0", 00:12:15.637 "bdev_name": "Malloc0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd1", 00:12:15.637 "bdev_name": "Malloc1p0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd2", 00:12:15.637 "bdev_name": "Malloc1p1" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd3", 00:12:15.637 "bdev_name": "Malloc2p0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd4", 00:12:15.637 "bdev_name": "Malloc2p1" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd5", 00:12:15.637 "bdev_name": "Malloc2p2" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd6", 00:12:15.637 "bdev_name": "Malloc2p3" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd7", 00:12:15.637 "bdev_name": "Malloc2p4" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd8", 00:12:15.637 "bdev_name": "Malloc2p5" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd9", 00:12:15.637 "bdev_name": "Malloc2p6" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd10", 00:12:15.637 "bdev_name": "Malloc2p7" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd11", 00:12:15.637 "bdev_name": "TestPT" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd12", 00:12:15.637 "bdev_name": "raid0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd13", 00:12:15.637 "bdev_name": "concat0" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd14", 00:12:15.637 "bdev_name": "raid1" 00:12:15.637 }, 00:12:15.637 { 00:12:15.637 "nbd_device": "/dev/nbd15", 00:12:15.637 "bdev_name": "AIO0" 00:12:15.637 } 00:12:15.637 ]' 00:12:15.637 16:29:46 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:15.637 16:29:47 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:15.637 16:29:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.637 16:29:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:15.637 16:29:47 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:12:15.637 16:29:47 -- bdev/nbd_common.sh@51 -- # local i 00:12:15.637 16:29:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.637 16:29:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@41 -- # break 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.895 16:29:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@41 -- # break 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.152 16:29:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@41 -- # break 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.410 16:29:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@41 -- # break 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.668 16:29:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:16.926 16:29:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:16.926 16:29:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:16.926 
16:29:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:16.926 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.926 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.926 16:29:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:16.926 16:29:48 -- bdev/nbd_common.sh@41 -- # break 00:12:16.926 16:29:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.926 16:29:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.926 16:29:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@41 -- # break 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@41 -- # break 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.183 16:29:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@41 -- # break 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.441 16:29:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@41 -- # break 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:17.699 16:29:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@41 -- # break 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.957 16:29:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@41 -- # break 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.214 16:29:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@41 -- # break 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.472 16:29:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@41 -- # break 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.729 16:29:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@41 -- # break 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.987 16:29:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@41 -- # break 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.244 16:29:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@41 -- # break 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.501 16:29:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@65 -- # true 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@65 -- # count=0 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@122 -- # count=0 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@127 -- # return 0 00:12:19.758 16:29:51 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@12 -- # local i 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:19.758 16:29:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:20.016 /dev/nbd0 00:12:20.016 16:29:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:20.016 16:29:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:20.016 16:29:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:20.016 16:29:51 -- common/autotest_common.sh@857 -- # local i 00:12:20.016 16:29:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:20.016 16:29:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:20.016 16:29:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:20.016 16:29:51 -- common/autotest_common.sh@861 -- # break 00:12:20.016 16:29:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:20.016 16:29:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:20.016 16:29:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.016 1+0 records in 00:12:20.016 1+0 records out 00:12:20.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433473 s, 9.4 MB/s 00:12:20.016 16:29:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.016 16:29:51 -- common/autotest_common.sh@874 -- # size=4096 00:12:20.016 16:29:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.016 16:29:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:20.016 16:29:51 -- common/autotest_common.sh@877 -- # return 0 00:12:20.016 16:29:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:20.016 
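
The waitfornbd helper traced here (common/autotest_common.sh@856-877) is the start-side counterpart: it waits for the node to appear in /proc/partitions, then proves the device answers I/O by copying one 4 KiB block out with O_DIRECT and checking that the copy is non-empty. A sketch of the same sequence:

  waitfornbd() {
      local nbd_name=$1 i size
      local test_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1  # assumed retry delay, not visible in the trace
      done
      # read one block straight off the device
      dd if="/dev/$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct
      size=$(stat -c %s "$test_file")
      rm -f "$test_file"
      [ "$size" != 0 ]  # non-empty copy means the device is live
  }
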
16:29:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:20.016 16:29:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:20.274 /dev/nbd1 00:12:20.274 16:29:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:20.274 16:29:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:20.274 16:29:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:20.274 16:29:51 -- common/autotest_common.sh@857 -- # local i 00:12:20.274 16:29:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:20.274 16:29:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:20.274 16:29:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:20.531 16:29:51 -- common/autotest_common.sh@861 -- # break 00:12:20.531 16:29:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:20.531 16:29:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:20.531 16:29:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.531 1+0 records in 00:12:20.531 1+0 records out 00:12:20.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303623 s, 13.5 MB/s 00:12:20.531 16:29:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.531 16:29:51 -- common/autotest_common.sh@874 -- # size=4096 00:12:20.531 16:29:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.531 16:29:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:20.531 16:29:51 -- common/autotest_common.sh@877 -- # return 0 00:12:20.531 16:29:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:20.531 16:29:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:20.531 16:29:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:20.531 /dev/nbd10 00:12:20.531 16:29:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:20.531 16:29:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:20.531 16:29:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:20.531 16:29:51 -- common/autotest_common.sh@857 -- # local i 00:12:20.531 16:29:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:20.531 16:29:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:20.531 16:29:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:20.531 16:29:51 -- common/autotest_common.sh@861 -- # break 00:12:20.531 16:29:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:20.531 16:29:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:20.531 16:29:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.531 1+0 records in 00:12:20.531 1+0 records out 00:12:20.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685971 s, 6.0 MB/s 00:12:20.531 16:29:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.531 16:29:51 -- common/autotest_common.sh@874 -- # size=4096 00:12:20.531 16:29:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.791 16:29:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:20.791 16:29:51 -- common/autotest_common.sh@877 -- # return 0 00:12:20.791 16:29:51 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:12:20.791 16:29:51 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:20.791 16:29:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:20.791 /dev/nbd11 00:12:21.049 16:29:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:21.049 16:29:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:21.049 16:29:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:21.049 16:29:52 -- common/autotest_common.sh@857 -- # local i 00:12:21.049 16:29:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:21.049 16:29:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:21.049 16:29:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:21.049 16:29:52 -- common/autotest_common.sh@861 -- # break 00:12:21.049 16:29:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:21.049 16:29:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:21.049 16:29:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.049 1+0 records in 00:12:21.049 1+0 records out 00:12:21.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432568 s, 9.5 MB/s 00:12:21.049 16:29:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.049 16:29:52 -- common/autotest_common.sh@874 -- # size=4096 00:12:21.049 16:29:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.049 16:29:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:21.049 16:29:52 -- common/autotest_common.sh@877 -- # return 0 00:12:21.049 16:29:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:21.049 16:29:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:21.049 16:29:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:21.306 /dev/nbd12 00:12:21.306 16:29:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:21.306 16:29:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:21.306 16:29:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:21.306 16:29:52 -- common/autotest_common.sh@857 -- # local i 00:12:21.306 16:29:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:21.306 16:29:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:21.306 16:29:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:21.306 16:29:52 -- common/autotest_common.sh@861 -- # break 00:12:21.306 16:29:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:21.306 16:29:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:21.306 16:29:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.306 1+0 records in 00:12:21.306 1+0 records out 00:12:21.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672132 s, 6.1 MB/s 00:12:21.306 16:29:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.306 16:29:52 -- common/autotest_common.sh@874 -- # size=4096 00:12:21.306 16:29:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.306 16:29:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:21.306 16:29:52 -- common/autotest_common.sh@877 -- # return 0 00:12:21.306 16:29:52 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:21.306 16:29:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:21.306 16:29:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:21.562 /dev/nbd13 00:12:21.562 16:29:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:21.562 16:29:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:21.562 16:29:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:21.562 16:29:52 -- common/autotest_common.sh@857 -- # local i 00:12:21.562 16:29:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:21.562 16:29:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:21.562 16:29:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:21.562 16:29:52 -- common/autotest_common.sh@861 -- # break 00:12:21.562 16:29:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:21.562 16:29:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:21.562 16:29:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.562 1+0 records in 00:12:21.562 1+0 records out 00:12:21.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443286 s, 9.2 MB/s 00:12:21.562 16:29:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.562 16:29:52 -- common/autotest_common.sh@874 -- # size=4096 00:12:21.562 16:29:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.562 16:29:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:21.562 16:29:52 -- common/autotest_common.sh@877 -- # return 0 00:12:21.562 16:29:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:21.562 16:29:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:21.562 16:29:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:21.819 /dev/nbd14 00:12:21.819 16:29:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:21.819 16:29:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:21.819 16:29:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:21.819 16:29:53 -- common/autotest_common.sh@857 -- # local i 00:12:21.819 16:29:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:21.819 16:29:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:21.819 16:29:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:21.819 16:29:53 -- common/autotest_common.sh@861 -- # break 00:12:21.819 16:29:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:21.820 16:29:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:21.820 16:29:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.820 1+0 records in 00:12:21.820 1+0 records out 00:12:21.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410318 s, 10.0 MB/s 00:12:21.820 16:29:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.820 16:29:53 -- common/autotest_common.sh@874 -- # size=4096 00:12:21.820 16:29:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.820 16:29:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:21.820 16:29:53 -- common/autotest_common.sh@877 -- # return 0 
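
This round restarts all sixteen bdevs against a lexically sorted device list, which is why Malloc1p1 now lands on /dev/nbd10 rather than /dev/nbd2. The driving loop, nbd_start_disks at bdev/nbd_common.sh@14-17, is roughly:

  for ((i = 0; i < 16; i++)); do
      # attach bdev_list[i] to nbd_list[i] over the dedicated RPC socket
      ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
      waitfornbd "$(basename "${nbd_list[i]}")"
  done
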
00:12:21.820 16:29:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:21.820 16:29:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:21.820 16:29:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:22.077 /dev/nbd15 00:12:22.077 16:29:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:22.077 16:29:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:22.077 16:29:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:22.077 16:29:53 -- common/autotest_common.sh@857 -- # local i 00:12:22.077 16:29:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:22.077 16:29:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:22.077 16:29:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:22.334 16:29:53 -- common/autotest_common.sh@861 -- # break 00:12:22.334 16:29:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:22.334 16:29:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:22.334 16:29:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.334 1+0 records in 00:12:22.334 1+0 records out 00:12:22.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652181 s, 6.3 MB/s 00:12:22.334 16:29:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.334 16:29:53 -- common/autotest_common.sh@874 -- # size=4096 00:12:22.334 16:29:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.334 16:29:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:22.334 16:29:53 -- common/autotest_common.sh@877 -- # return 0 00:12:22.334 16:29:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.334 16:29:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:22.334 16:29:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:22.592 /dev/nbd2 00:12:22.592 16:29:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:22.592 16:29:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:22.593 16:29:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:22.593 16:29:53 -- common/autotest_common.sh@857 -- # local i 00:12:22.593 16:29:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:22.593 16:29:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:22.593 16:29:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:22.593 16:29:53 -- common/autotest_common.sh@861 -- # break 00:12:22.593 16:29:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:22.593 16:29:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:22.593 16:29:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.593 1+0 records in 00:12:22.593 1+0 records out 00:12:22.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731587 s, 5.6 MB/s 00:12:22.593 16:29:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.593 16:29:53 -- common/autotest_common.sh@874 -- # size=4096 00:12:22.593 16:29:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.593 16:29:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:22.593 16:29:53 -- common/autotest_common.sh@877 
-- # return 0 00:12:22.593 16:29:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.593 16:29:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:22.593 16:29:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:22.850 /dev/nbd3 00:12:22.850 16:29:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:22.850 16:29:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:22.850 16:29:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:22.850 16:29:54 -- common/autotest_common.sh@857 -- # local i 00:12:22.850 16:29:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:22.850 16:29:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:22.850 16:29:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:22.850 16:29:54 -- common/autotest_common.sh@861 -- # break 00:12:22.850 16:29:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:22.850 16:29:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:22.850 16:29:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.850 1+0 records in 00:12:22.850 1+0 records out 00:12:22.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716208 s, 5.7 MB/s 00:12:22.851 16:29:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.851 16:29:54 -- common/autotest_common.sh@874 -- # size=4096 00:12:22.851 16:29:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.851 16:29:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:22.851 16:29:54 -- common/autotest_common.sh@877 -- # return 0 00:12:22.851 16:29:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.851 16:29:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:22.851 16:29:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:23.109 /dev/nbd4 00:12:23.109 16:29:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:23.109 16:29:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:23.109 16:29:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:23.109 16:29:54 -- common/autotest_common.sh@857 -- # local i 00:12:23.109 16:29:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:23.109 16:29:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:23.109 16:29:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:23.109 16:29:54 -- common/autotest_common.sh@861 -- # break 00:12:23.109 16:29:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:23.109 16:29:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:23.109 16:29:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.109 1+0 records in 00:12:23.109 1+0 records out 00:12:23.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786611 s, 5.2 MB/s 00:12:23.109 16:29:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.109 16:29:54 -- common/autotest_common.sh@874 -- # size=4096 00:12:23.109 16:29:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.109 16:29:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:23.109 16:29:54 -- 
common/autotest_common.sh@877 -- # return 0 00:12:23.109 16:29:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.109 16:29:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:23.109 16:29:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:23.367 /dev/nbd5 00:12:23.367 16:29:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:23.367 16:29:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:23.367 16:29:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:23.367 16:29:54 -- common/autotest_common.sh@857 -- # local i 00:12:23.367 16:29:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:23.367 16:29:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:23.367 16:29:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:23.367 16:29:54 -- common/autotest_common.sh@861 -- # break 00:12:23.367 16:29:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:23.367 16:29:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:23.367 16:29:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.367 1+0 records in 00:12:23.367 1+0 records out 00:12:23.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880609 s, 4.7 MB/s 00:12:23.367 16:29:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.367 16:29:54 -- common/autotest_common.sh@874 -- # size=4096 00:12:23.367 16:29:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.367 16:29:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:23.367 16:29:54 -- common/autotest_common.sh@877 -- # return 0 00:12:23.367 16:29:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.367 16:29:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:23.367 16:29:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:23.625 /dev/nbd6 00:12:23.625 16:29:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:23.625 16:29:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:23.625 16:29:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:23.625 16:29:54 -- common/autotest_common.sh@857 -- # local i 00:12:23.625 16:29:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:23.625 16:29:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:23.625 16:29:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:23.625 16:29:54 -- common/autotest_common.sh@861 -- # break 00:12:23.625 16:29:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:23.625 16:29:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:23.625 16:29:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.625 1+0 records in 00:12:23.625 1+0 records out 00:12:23.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119787 s, 3.4 MB/s 00:12:23.625 16:29:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.625 16:29:54 -- common/autotest_common.sh@874 -- # size=4096 00:12:23.625 16:29:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.625 16:29:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:23.626 16:29:54 -- 
common/autotest_common.sh@877 -- # return 0 00:12:23.626 16:29:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.626 16:29:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:23.626 16:29:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:23.883 /dev/nbd7 00:12:23.883 16:29:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:23.883 16:29:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:23.883 16:29:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:23.883 16:29:55 -- common/autotest_common.sh@857 -- # local i 00:12:23.883 16:29:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:23.883 16:29:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:23.883 16:29:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:23.883 16:29:55 -- common/autotest_common.sh@861 -- # break 00:12:23.883 16:29:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:23.883 16:29:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:23.883 16:29:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.883 1+0 records in 00:12:23.883 1+0 records out 00:12:23.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00089204 s, 4.6 MB/s 00:12:23.883 16:29:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.883 16:29:55 -- common/autotest_common.sh@874 -- # size=4096 00:12:23.883 16:29:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.883 16:29:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:23.883 16:29:55 -- common/autotest_common.sh@877 -- # return 0 00:12:23.883 16:29:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.883 16:29:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:23.883 16:29:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:24.141 /dev/nbd8 00:12:24.141 16:29:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:24.141 16:29:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:24.141 16:29:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:24.141 16:29:55 -- common/autotest_common.sh@857 -- # local i 00:12:24.141 16:29:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:24.141 16:29:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:24.141 16:29:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:24.141 16:29:55 -- common/autotest_common.sh@861 -- # break 00:12:24.141 16:29:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:24.141 16:29:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:24.141 16:29:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.141 1+0 records in 00:12:24.141 1+0 records out 00:12:24.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00310736 s, 1.3 MB/s 00:12:24.141 16:29:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.141 16:29:55 -- common/autotest_common.sh@874 -- # size=4096 00:12:24.141 16:29:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.141 16:29:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:24.141 16:29:55 -- 
common/autotest_common.sh@877 -- # return 0 00:12:24.141 16:29:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.141 16:29:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:24.141 16:29:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:24.400 /dev/nbd9 00:12:24.400 16:29:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:24.400 16:29:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:24.400 16:29:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:24.400 16:29:55 -- common/autotest_common.sh@857 -- # local i 00:12:24.400 16:29:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:24.400 16:29:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:24.400 16:29:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:24.400 16:29:55 -- common/autotest_common.sh@861 -- # break 00:12:24.400 16:29:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:24.400 16:29:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:24.400 16:29:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.400 1+0 records in 00:12:24.400 1+0 records out 00:12:24.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135351 s, 3.0 MB/s 00:12:24.400 16:29:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.400 16:29:55 -- common/autotest_common.sh@874 -- # size=4096 00:12:24.400 16:29:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.400 16:29:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:24.400 16:29:55 -- common/autotest_common.sh@877 -- # return 0 00:12:24.400 16:29:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.400 16:29:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:24.400 16:29:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:24.400 16:29:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.400 16:29:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd0", 00:12:24.658 "bdev_name": "Malloc0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd1", 00:12:24.658 "bdev_name": "Malloc1p0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd10", 00:12:24.658 "bdev_name": "Malloc1p1" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd11", 00:12:24.658 "bdev_name": "Malloc2p0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd12", 00:12:24.658 "bdev_name": "Malloc2p1" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd13", 00:12:24.658 "bdev_name": "Malloc2p2" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd14", 00:12:24.658 "bdev_name": "Malloc2p3" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd15", 00:12:24.658 "bdev_name": "Malloc2p4" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd2", 00:12:24.658 "bdev_name": "Malloc2p5" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd3", 00:12:24.658 "bdev_name": "Malloc2p6" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd4", 00:12:24.658 "bdev_name": "Malloc2p7" 00:12:24.658 }, 00:12:24.658 { 
00:12:24.658 "nbd_device": "/dev/nbd5", 00:12:24.658 "bdev_name": "TestPT" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd6", 00:12:24.658 "bdev_name": "raid0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd7", 00:12:24.658 "bdev_name": "concat0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd8", 00:12:24.658 "bdev_name": "raid1" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd9", 00:12:24.658 "bdev_name": "AIO0" 00:12:24.658 } 00:12:24.658 ]' 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd0", 00:12:24.658 "bdev_name": "Malloc0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd1", 00:12:24.658 "bdev_name": "Malloc1p0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd10", 00:12:24.658 "bdev_name": "Malloc1p1" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd11", 00:12:24.658 "bdev_name": "Malloc2p0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd12", 00:12:24.658 "bdev_name": "Malloc2p1" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd13", 00:12:24.658 "bdev_name": "Malloc2p2" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd14", 00:12:24.658 "bdev_name": "Malloc2p3" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd15", 00:12:24.658 "bdev_name": "Malloc2p4" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd2", 00:12:24.658 "bdev_name": "Malloc2p5" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd3", 00:12:24.658 "bdev_name": "Malloc2p6" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd4", 00:12:24.658 "bdev_name": "Malloc2p7" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd5", 00:12:24.658 "bdev_name": "TestPT" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd6", 00:12:24.658 "bdev_name": "raid0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd7", 00:12:24.658 "bdev_name": "concat0" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd8", 00:12:24.658 "bdev_name": "raid1" 00:12:24.658 }, 00:12:24.658 { 00:12:24.658 "nbd_device": "/dev/nbd9", 00:12:24.658 "bdev_name": "AIO0" 00:12:24.658 } 00:12:24.658 ]' 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:24.658 /dev/nbd1 00:12:24.658 /dev/nbd10 00:12:24.658 /dev/nbd11 00:12:24.658 /dev/nbd12 00:12:24.658 /dev/nbd13 00:12:24.658 /dev/nbd14 00:12:24.658 /dev/nbd15 00:12:24.658 /dev/nbd2 00:12:24.658 /dev/nbd3 00:12:24.658 /dev/nbd4 00:12:24.658 /dev/nbd5 00:12:24.658 /dev/nbd6 00:12:24.658 /dev/nbd7 00:12:24.658 /dev/nbd8 00:12:24.658 /dev/nbd9' 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:24.658 /dev/nbd1 00:12:24.658 /dev/nbd10 00:12:24.658 /dev/nbd11 00:12:24.658 /dev/nbd12 00:12:24.658 /dev/nbd13 00:12:24.658 /dev/nbd14 00:12:24.658 /dev/nbd15 00:12:24.658 /dev/nbd2 00:12:24.658 /dev/nbd3 00:12:24.658 /dev/nbd4 00:12:24.658 /dev/nbd5 00:12:24.658 /dev/nbd6 00:12:24.658 /dev/nbd7 00:12:24.658 /dev/nbd8 00:12:24.658 /dev/nbd9' 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@65 -- # count=16 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@66 -- # echo 16 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@95 -- # count=16 00:12:24.658 16:29:56 -- 
bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:24.658 16:29:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:24.916 256+0 records in 00:12:24.916 256+0 records out 00:12:24.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00850898 s, 123 MB/s 00:12:24.916 16:29:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:24.916 16:29:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:24.916 256+0 records in 00:12:24.916 256+0 records out 00:12:24.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151746 s, 6.9 MB/s 00:12:24.916 16:29:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:24.916 16:29:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:25.173 256+0 records in 00:12:25.173 256+0 records out 00:12:25.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154272 s, 6.8 MB/s 00:12:25.173 16:29:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:25.173 16:29:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:25.173 256+0 records in 00:12:25.173 256+0 records out 00:12:25.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153891 s, 6.8 MB/s 00:12:25.173 16:29:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:25.173 16:29:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:25.431 256+0 records in 00:12:25.431 256+0 records out 00:12:25.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15397 s, 6.8 MB/s 00:12:25.431 16:29:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:25.431 16:29:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:25.689 256+0 records in 00:12:25.689 256+0 records out 00:12:25.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155867 s, 6.7 MB/s 00:12:25.689 16:29:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:25.689 16:29:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:25.689 256+0 records in 00:12:25.689 256+0 records out 00:12:25.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157918 s, 6.6 MB/s 00:12:25.689 16:29:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:25.689 16:29:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:25.946 256+0 records in 00:12:25.946 256+0 records out 00:12:25.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153024 s, 6.9 MB/s 00:12:25.946 16:29:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:25.946 16:29:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:25.946 256+0 records in 00:12:25.946 256+0 records out 00:12:25.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153759 s, 6.8 MB/s 00:12:25.946 16:29:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:25.946 16:29:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:26.202 256+0 records in 00:12:26.202 256+0 records out 00:12:26.202 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154164 s, 6.8 MB/s 00:12:26.202 16:29:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.202 16:29:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:26.467 256+0 records in 00:12:26.467 256+0 records out 00:12:26.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154911 s, 6.8 MB/s 00:12:26.467 16:29:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.467 16:29:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:26.467 256+0 records in 00:12:26.467 256+0 records out 00:12:26.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157338 s, 6.7 MB/s 00:12:26.467 16:29:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.467 16:29:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:26.741 256+0 records in 00:12:26.741 256+0 records out 00:12:26.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160983 s, 6.5 MB/s 00:12:26.741 16:29:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.741 16:29:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:27.002 256+0 records in 00:12:27.002 256+0 records out 00:12:27.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160032 s, 6.6 MB/s 00:12:27.002 16:29:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.002 16:29:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:27.002 256+0 records in 00:12:27.002 256+0 records out 00:12:27.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155673 s, 6.7 MB/s 00:12:27.002 16:29:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.002 16:29:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:27.259 256+0 records in 00:12:27.259 256+0 records out 00:12:27.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159721 s, 6.6 MB/s 00:12:27.259 16:29:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.259 16:29:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:27.517 256+0 records in 00:12:27.517 256+0 records out 00:12:27.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191996 s, 5.5 MB/s 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
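
That completes the write half of nbd_dd_data_verify: 256 random 4 KiB blocks were generated once into nbdrandtest (@76), then written to each of the sixteen devices with oflag=direct (@77-78); the roughly 0.15 s per-device times reflect 4 KiB O_DIRECT writes through the nbd layer. In outline:

  tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for i in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
  done
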
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- 
# for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@51 -- # local i 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.517 16:29:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@41 -- # break 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.775 16:29:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@41 -- # break 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.033 16:29:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:28.291 16:29:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:28.291 16:29:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:28.291 16:29:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:28.291 16:29:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.291 16:29:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.291 16:29:59 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:28.291 16:29:59 -- bdev/nbd_common.sh@41 -- # break 00:12:28.291 16:29:59 -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.291 16:29:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.291 16:29:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:28.548 16:29:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:28.548 16:30:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:28.548 16:30:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:28.548 16:30:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.548 16:30:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.548 16:30:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:28.548 16:30:00 -- bdev/nbd_common.sh@41 -- # break 00:12:28.548 16:30:00 -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.548 16:30:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.548 16:30:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:28.805 16:30:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:29.063 16:30:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:29.063 16:30:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:29.063 16:30:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.063 16:30:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.063 16:30:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:29.063 16:30:00 -- bdev/nbd_common.sh@41 -- # break 00:12:29.063 16:30:00 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.063 16:30:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.063 16:30:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@41 -- # break 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.321 16:30:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@41 -- # break 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.578 16:30:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:29.835 16:30:01 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:29.835 16:30:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:29.835 16:30:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:29.835 16:30:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.835 16:30:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.835 16:30:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:29.835 16:30:01 -- bdev/nbd_common.sh@41 -- # break 00:12:29.835 16:30:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.835 16:30:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.835 16:30:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@41 -- # break 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.092 16:30:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:30.348 16:30:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:30.348 16:30:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:30.348 16:30:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:30.348 16:30:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.348 16:30:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.348 16:30:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:30.348 16:30:01 -- bdev/nbd_common.sh@41 -- # break 00:12:30.348 16:30:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.348 16:30:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.349 16:30:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@41 -- # break 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.605 16:30:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@41 
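Every nbd_stop_disk call in this stop loop is followed by the same waitfornbd_exit poll: up to 20 passes over /proc/partitions until the device name disappears. Reconstructed from the trace; the sleep between polls is an assumption, since only the loop counter, the grep, and the break are visible here:

  waitfornbd_exit() {
      local nbd_name=$1
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              sleep 0.1   # assumed back-off; not shown in the trace
          else
              break       # device is gone from the kernel's partition table
          fi
      done
      return 0
  }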
-- # break 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.861 16:30:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@41 -- # break 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:31.117 16:30:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@41 -- # break 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@41 -- # break 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.373 16:30:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@41 -- # break 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.629 16:30:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@63 -- # 
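The nbd_get_count helper invoked at the end of the trace above (its output opens the next chunk) asks the target which devices are still exported and insists the answer is zero. The same check as a standalone sequence, using the rpc.py path and socket from the log; the `|| true` guard is inferred from the `true` step visible in the trace, since grep -c exits non-zero when it counts zero matches:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  nbd_disks_json=$($rpc nbd_get_disks)                                 # '[]' when nothing is left
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
  if [ "$count" -ne 0 ]; then
      echo "nbd devices still attached" >&2
  fi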
nbd_disks_json='[]' 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@65 -- # true 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@65 -- # count=0 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@104 -- # count=0 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@109 -- # return 0 00:12:31.886 16:30:03 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:31.886 16:30:03 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:32.144 malloc_lvol_verify 00:12:32.144 16:30:03 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:32.401 a93aea41-baa4-4c05-81ad-de5306b0b4b7 00:12:32.401 16:30:03 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:32.658 c50eff10-7e63-46a6-921c-af379adea771 00:12:32.658 16:30:03 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:32.916 /dev/nbd0 00:12:32.916 16:30:04 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:32.916 mke2fs 1.46.5 (30-Dec-2021) 00:12:32.916 00:12:32.916 Filesystem too small for a journal 00:12:32.916 Discarding device blocks: 0/1024 done 00:12:32.916 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:32.916 00:12:32.916 Allocating group tables: 0/1 done 00:12:32.916 Writing inode tables: 0/1 done 00:12:32.916 Writing superblocks and filesystem accounting information: 0/1 done 00:12:32.916 00:12:32.916 16:30:04 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:32.916 16:30:04 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:32.916 16:30:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.916 16:30:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:32.916 16:30:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.916 16:30:04 -- bdev/nbd_common.sh@51 -- # local i 00:12:32.916 16:30:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.916 16:30:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:33.174 16:30:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.174 
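Squeezed into the chunk above is the whole nbd_with_lvol_verify round trip: build a malloc bdev, layer a logical volume store and a volume on it, export the volume over NBD, and prove it handles real I/O by formatting it. The same sequence, with the RPC invocation shortened into a variable for readability (commands, sizes, and names exactly as passed in the trace):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB bdev, 512-byte blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
  $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume in that store
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export it as /dev/nbd0
  mkfs.ext4 /dev/nbd0                                    # "too small for a journal" is expected here
  $rpc nbd_stop_disk /dev/nbd0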
16:30:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.174 16:30:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.174 16:30:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.174 16:30:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.174 16:30:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.174 16:30:04 -- bdev/nbd_common.sh@41 -- # break 00:12:33.174 16:30:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.174 16:30:04 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:33.174 16:30:04 -- bdev/nbd_common.sh@147 -- # return 0 00:12:33.174 16:30:04 -- bdev/blockdev.sh@324 -- # killprocess 119798 00:12:33.174 16:30:04 -- common/autotest_common.sh@926 -- # '[' -z 119798 ']' 00:12:33.174 16:30:04 -- common/autotest_common.sh@930 -- # kill -0 119798 00:12:33.174 16:30:04 -- common/autotest_common.sh@931 -- # uname 00:12:33.174 16:30:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:33.174 16:30:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119798 00:12:33.174 16:30:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:33.174 16:30:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:33.174 16:30:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119798' 00:12:33.174 killing process with pid 119798 00:12:33.174 16:30:04 -- common/autotest_common.sh@945 -- # kill 119798 00:12:33.174 16:30:04 -- common/autotest_common.sh@950 -- # wait 119798 00:12:33.739 16:30:05 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:33.739 00:12:33.739 real 0m23.934s 00:12:33.739 user 0m30.538s 00:12:33.739 sys 0m11.798s 00:12:33.739 16:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.739 16:30:05 -- common/autotest_common.sh@10 -- # set +x 00:12:33.739 ************************************ 00:12:33.739 END TEST bdev_nbd 00:12:33.739 ************************************ 00:12:33.739 16:30:05 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:33.739 16:30:05 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:12:33.739 16:30:05 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:12:33.739 16:30:05 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:33.739 16:30:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:33.739 16:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:33.739 16:30:05 -- common/autotest_common.sh@10 -- # set +x 00:12:33.739 ************************************ 00:12:33.739 START TEST bdev_fio 00:12:33.739 ************************************ 00:12:33.739 16:30:05 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:12:33.739 16:30:05 -- bdev/blockdev.sh@329 -- # local env_context 00:12:33.739 16:30:05 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:33.739 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:33.739 16:30:05 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:33.739 16:30:05 -- bdev/blockdev.sh@337 -- # echo '' 00:12:33.739 16:30:05 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:33.997 16:30:05 -- bdev/blockdev.sh@337 -- # env_context= 00:12:33.997 16:30:05 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:33.997 16:30:05 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:33.997 16:30:05 -- common/autotest_common.sh@1260 -- # 
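killprocess, as traced above against pid 119798, is defensive about what it kills: it confirms the pid is set and alive, checks the process name (reactor_0 here, so the sudo special case implied by the `'[' reactor_0 = sudo ']'` test does not fire), then signals and reaps it. A behavioral sketch under those observations, not the verbatim helper:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                  # still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK app
      echo "killing process with pid $pid"
      kill "$pid"                                 # plain kill for a non-sudo process
      wait "$pid" || true                         # reap it; ignore the exit status
  }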
local workload=verify 00:12:33.997 16:30:05 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:12:33.997 16:30:05 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:33.997 16:30:05 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:33.997 16:30:05 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:33.997 16:30:05 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:12:33.997 16:30:05 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:33.997 16:30:05 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:33.997 16:30:05 -- common/autotest_common.sh@1280 -- # cat 00:12:33.997 16:30:05 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:12:33.997 16:30:05 -- common/autotest_common.sh@1293 -- # cat 00:12:33.997 16:30:05 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:12:33.997 16:30:05 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:12:33.997 16:30:05 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:33.997 16:30:05 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b 
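The run of echo pairs above and below is blockdev.sh writing one fio job stanza per bdev into bdev.fio: a `[job_<name>]` section header followed by `filename=<name>`, which the spdk_bdev ioengine later resolves to the bdev itself rather than a block-device path. The loop, with the redirection into the config file made explicit (the trace shows only the echoes; appending them to bdev.fio is an assumption about where they land):

  config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
  for b in "${bdevs_name[@]}"; do
      echo "[job_$b]"     >> "$config"   # one fio job section per bdev
      echo "filename=$b"  >> "$config"   # bdev name, resolved by the spdk_bdev ioengine
  done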
in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:12:33.997 16:30:05 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:33.997 16:30:05 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:12:33.997 16:30:05 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:33.997 16:30:05 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:33.997 16:30:05 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:33.997 16:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:33.997 16:30:05 -- common/autotest_common.sh@10 -- # set +x 00:12:33.997 ************************************ 00:12:33.997 START TEST bdev_fio_rw_verify 00:12:33.997 ************************************ 00:12:33.998 16:30:05 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:33.998 16:30:05 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:33.998 16:30:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:33.998 16:30:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:33.998 16:30:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:33.998 16:30:05 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.998 16:30:05 -- common/autotest_common.sh@1320 -- # shift 00:12:33.998 16:30:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:33.998 16:30:05 -- common/autotest_common.sh@1323 -- # for sanitizer in 
"${sanitizers[@]}" 00:12:33.998 16:30:05 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.998 16:30:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:33.998 16:30:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:33.998 16:30:05 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:33.998 16:30:05 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:33.998 16:30:05 -- common/autotest_common.sh@1326 -- # break 00:12:33.998 16:30:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:33.998 16:30:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:34.256 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:34.256 fio-3.35 00:12:34.256 Starting 16 threads 00:12:46.450 00:12:46.450 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=120951: Sat Jul 13 16:30:16 2024 00:12:46.450 read: IOPS=83.1k, BW=325MiB/s (340MB/s)(3250MiB/10009msec) 00:12:46.450 slat (nsec): min=1777, max=40028k, avg=32634.76, stdev=419673.91 00:12:46.450 clat (usec): min=8, max=52307, avg=275.30, stdev=1266.61 00:12:46.450 
lat (usec): min=21, max=52328, avg=307.93, stdev=1333.72 00:12:46.450 clat percentiles (usec): 00:12:46.450 | 50.000th=[ 163], 99.000th=[ 709], 99.900th=[16450], 99.990th=[32113], 00:12:46.450 | 99.999th=[48497] 00:12:46.450 write: IOPS=131k, BW=511MiB/s (536MB/s)(5054MiB/9897msec); 0 zone resets 00:12:46.450 slat (usec): min=4, max=60037, avg=61.14, stdev=649.69 00:12:46.450 clat (usec): min=8, max=80283, avg=360.62, stdev=1544.84 00:12:46.450 lat (usec): min=32, max=80308, avg=421.76, stdev=1677.01 00:12:46.450 clat percentiles (usec): 00:12:46.450 | 50.000th=[ 204], 99.000th=[ 4146], 99.900th=[20579], 99.990th=[37487], 00:12:46.450 | 99.999th=[56886] 00:12:46.450 bw ( KiB/s): min=297304, max=854923, per=99.42%, avg=519911.53, stdev=9377.75, samples=304 00:12:46.450 iops : min=74326, max=213730, avg=129977.79, stdev=2344.44, samples=304 00:12:46.450 lat (usec) : 10=0.01%, 20=0.01%, 50=0.74%, 100=13.70%, 250=59.50% 00:12:46.450 lat (usec) : 500=22.64%, 750=1.99%, 1000=0.32% 00:12:46.450 lat (msec) : 2=0.14%, 4=0.07%, 10=0.21%, 20=0.57%, 50=0.10% 00:12:46.450 lat (msec) : 100=0.01% 00:12:46.450 cpu : usr=55.68%, sys=2.33%, ctx=263477, majf=2, minf=99952 00:12:46.450 IO depths : 1=11.3%, 2=23.8%, 4=51.8%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:46.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.450 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.450 issued rwts: total=831904,1293933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.450 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:46.450 00:12:46.450 Run status group 0 (all jobs): 00:12:46.450 READ: bw=325MiB/s (340MB/s), 325MiB/s-325MiB/s (340MB/s-340MB/s), io=3250MiB (3407MB), run=10009-10009msec 00:12:46.450 WRITE: bw=511MiB/s (536MB/s), 511MiB/s-511MiB/s (536MB/s-536MB/s), io=5054MiB (5300MB), run=9897-9897msec 00:12:46.450 ----------------------------------------------------- 00:12:46.450 Suppressions used: 00:12:46.450 count bytes template 00:12:46.450 16 140 /usr/src/fio/parse.c 00:12:46.450 10843 1040928 /usr/src/fio/iolog.c 00:12:46.450 1 904 libcrypto.so 00:12:46.450 ----------------------------------------------------- 00:12:46.450 00:12:46.450 00:12:46.450 real 0m12.099s 00:12:46.451 user 1m31.964s 00:12:46.451 sys 0m4.847s 00:12:46.451 16:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.451 16:30:17 -- common/autotest_common.sh@10 -- # set +x 00:12:46.451 ************************************ 00:12:46.451 END TEST bdev_fio_rw_verify 00:12:46.451 ************************************ 00:12:46.451 16:30:17 -- bdev/blockdev.sh@348 -- # rm -f 00:12:46.451 16:30:17 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:46.451 16:30:17 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:46.451 16:30:17 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:46.451 16:30:17 -- common/autotest_common.sh@1260 -- # local workload=trim 00:12:46.451 16:30:17 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:12:46.451 16:30:17 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:46.451 16:30:17 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:46.451 16:30:17 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:46.451 16:30:17 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:12:46.451 16:30:17 
-- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:46.451 16:30:17 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:46.451 16:30:17 -- common/autotest_common.sh@1280 -- # cat 00:12:46.451 16:30:17 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:12:46.451 16:30:17 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:12:46.451 16:30:17 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:12:46.451 16:30:17 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:46.452 16:30:17 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fcf338be-7e14-491d-8a54-bc7aec556e2f"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fcf338be-7e14-491d-8a54-bc7aec556e2f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "99e94d71-015d-5b85-9bc1-2bf35bf81143"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "99e94d71-015d-5b85-9bc1-2bf35bf81143",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3ad384c5-fd74-52bf-b05e-77d66546cc6e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3ad384c5-fd74-52bf-b05e-77d66546cc6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "ec9ec735-c2a7-5e69-9200-08fa49e16821"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ec9ec735-c2a7-5e69-9200-08fa49e16821",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "614ec0c3-3313-534f-813e-9cfd240bd70a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "614ec0c3-3313-534f-813e-9cfd240bd70a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "81b531af-3946-54b2-9091-9cc59441039b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "81b531af-3946-54b2-9091-9cc59441039b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "72c30180-9dae-54a1-a1cc-a48ba9970d2e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "72c30180-9dae-54a1-a1cc-a48ba9970d2e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cad12f55-2337-5b33-96c5-aee5605882f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cad12f55-2337-5b33-96c5-aee5605882f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "6decb2d6-c69a-5533-a639-75b064ec36c7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6decb2d6-c69a-5533-a639-75b064ec36c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "807a2c63-2349-5f11-9e6b-e17532a6259d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "807a2c63-2349-5f11-9e6b-e17532a6259d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "8a7abe1d-a517-5740-9aa3-edd0a119568f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8a7abe1d-a517-5740-9aa3-edd0a119568f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "7431707f-80b0-5bea-b7f4-1bd557cb8ea5"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7431707f-80b0-5bea-b7f4-1bd557cb8ea5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "dcf368c1-c4fd-4ff4-b396-aa7a044d6670"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dcf368c1-c4fd-4ff4-b396-aa7a044d6670",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "dcf368c1-c4fd-4ff4-b396-aa7a044d6670",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "ebab76cf-6be8-4272-8fa3-6125006d53a4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "63627c1b-1944-49f7-97b8-eaadda54d413",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "68b6f2b0-0d7f-4ba7-9d19-7dd7198528e0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "68b6f2b0-0d7f-4ba7-9d19-7dd7198528e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "68b6f2b0-0d7f-4ba7-9d19-7dd7198528e0",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "3ae902fe-30d8-49b4-9c62-7f4b4612992f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "1ef59058-fc5d-4b21-a595-2e8764525835",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d4e9636d-bb3c-4e95-80a8-ce82ec2a6751"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4e9636d-bb3c-4e95-80a8-ce82ec2a6751",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d4e9636d-bb3c-4e95-80a8-ce82ec2a6751",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "411caae0-e161-4b85-93af-bc1596f5cc97",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 
65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "d7629de6-eb7b-4fe9-839b-70d0f4012984",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ec010262-a5e0-42e2-8096-85e914f96470"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ec010262-a5e0-42e2-8096-85e914f96470",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:46.452 16:30:17 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:46.452 Malloc1p0 00:12:46.452 Malloc1p1 00:12:46.452 Malloc2p0 00:12:46.452 Malloc2p1 00:12:46.452 Malloc2p2 00:12:46.452 Malloc2p3 00:12:46.452 Malloc2p4 00:12:46.452 Malloc2p5 00:12:46.452 Malloc2p6 00:12:46.452 Malloc2p7 00:12:46.452 TestPT 00:12:46.452 raid0 00:12:46.452 concat0 ]] 00:12:46.452 16:30:17 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fcf338be-7e14-491d-8a54-bc7aec556e2f"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fcf338be-7e14-491d-8a54-bc7aec556e2f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "99e94d71-015d-5b85-9bc1-2bf35bf81143"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "99e94d71-015d-5b85-9bc1-2bf35bf81143",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "3ad384c5-fd74-52bf-b05e-77d66546cc6e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3ad384c5-fd74-52bf-b05e-77d66546cc6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "ec9ec735-c2a7-5e69-9200-08fa49e16821"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ec9ec735-c2a7-5e69-9200-08fa49e16821",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "614ec0c3-3313-534f-813e-9cfd240bd70a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "614ec0c3-3313-534f-813e-9cfd240bd70a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "81b531af-3946-54b2-9091-9cc59441039b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "81b531af-3946-54b2-9091-9cc59441039b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "72c30180-9dae-54a1-a1cc-a48ba9970d2e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "72c30180-9dae-54a1-a1cc-a48ba9970d2e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cad12f55-2337-5b33-96c5-aee5605882f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"cad12f55-2337-5b33-96c5-aee5605882f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "6decb2d6-c69a-5533-a639-75b064ec36c7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6decb2d6-c69a-5533-a639-75b064ec36c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "807a2c63-2349-5f11-9e6b-e17532a6259d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "807a2c63-2349-5f11-9e6b-e17532a6259d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "8a7abe1d-a517-5740-9aa3-edd0a119568f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8a7abe1d-a517-5740-9aa3-edd0a119568f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "7431707f-80b0-5bea-b7f4-1bd557cb8ea5"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7431707f-80b0-5bea-b7f4-1bd557cb8ea5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "dcf368c1-c4fd-4ff4-b396-aa7a044d6670"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dcf368c1-c4fd-4ff4-b396-aa7a044d6670",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "dcf368c1-c4fd-4ff4-b396-aa7a044d6670",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "ebab76cf-6be8-4272-8fa3-6125006d53a4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "63627c1b-1944-49f7-97b8-eaadda54d413",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "68b6f2b0-0d7f-4ba7-9d19-7dd7198528e0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "68b6f2b0-0d7f-4ba7-9d19-7dd7198528e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "68b6f2b0-0d7f-4ba7-9d19-7dd7198528e0",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "3ae902fe-30d8-49b4-9c62-7f4b4612992f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "1ef59058-fc5d-4b21-a595-2e8764525835",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d4e9636d-bb3c-4e95-80a8-ce82ec2a6751"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4e9636d-bb3c-4e95-80a8-ce82ec2a6751",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d4e9636d-bb3c-4e95-80a8-ce82ec2a6751",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "411caae0-e161-4b85-93af-bc1596f5cc97",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "d7629de6-eb7b-4fe9-839b-70d0f4012984",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "ec010262-a5e0-42e2-8096-85e914f96470"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "ec010262-a5e0-42e2-8096-85e914f96470",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:46.453 16:30:17 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:46.453 16:30:17 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:46.453 16:30:17 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:46.453 16:30:17 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:46.453 16:30:17 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:46.453 16:30:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:46.453 16:30:17 -- common/autotest_common.sh@10 -- # set +x 00:12:46.453 ************************************ 00:12:46.453 START TEST bdev_fio_trim 00:12:46.453 ************************************ 00:12:46.453 16:30:17 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:46.453 16:30:17 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:46.453 16:30:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:46.453 16:30:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:46.453 16:30:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:46.453 16:30:17 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.453 16:30:17 -- common/autotest_common.sh@1320 -- # shift 00:12:46.453 16:30:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:46.453 16:30:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:46.453 16:30:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.453 16:30:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:46.453 16:30:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:46.453 16:30:17 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:46.453 16:30:17 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:46.453 16:30:17 -- common/autotest_common.sh@1326 -- # break 00:12:46.453 16:30:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:46.453 16:30:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:46.453 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:46.453 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8
00:12:46.453 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:46.453 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:46.453 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:46.453 fio-3.35
00:12:46.453 Starting 14 threads
00:12:58.655
00:12:58.655 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=121145: Sat Jul 13 16:30:28 2024
00:12:58.655 write: IOPS=135k, BW=526MiB/s (552MB/s)(5266MiB/10004msec); 0 zone resets
00:12:58.655 slat (nsec): min=1972, max=40068k, avg=35752.11, stdev=399600.27
00:12:58.655 clat (usec): min=21, max=40271, avg=266.23, stdev=1084.05
00:12:58.655 lat (usec): min=32, max=40298, avg=301.99, stdev=1154.66
00:12:58.655 clat percentiles (usec):
00:12:58.655 | 50.000th=[ 178], 99.000th=[ 478], 99.900th=[16188], 99.990th=[20317],
00:12:58.655 | 99.999th=[28181]
00:12:58.655 bw ( KiB/s): min=380999, max=791296, per=99.99%, avg=538947.16, stdev=9984.28, samples=266
00:12:58.656 iops : min=95257, max=197824, avg=134737.05, stdev=2496.03, samples=266
00:12:58.656 trim: IOPS=135k, BW=526MiB/s (552MB/s)(5266MiB/10004msec); 0 zone resets
00:12:58.656 slat (usec): min=4, max=40024, avg=26.20, stdev=346.26
00:12:58.656 clat (usec): min=3, max=40298, avg=284.82, stdev=1127.04
00:12:58.656 lat (usec): min=12, max=40308, avg=311.02, stdev=1178.69
00:12:58.656 clat percentiles (usec):
00:12:58.656 | 50.000th=[ 196], 99.000th=[ 412], 99.900th=[16319], 99.990th=[20317],
00:12:58.656 | 99.999th=[32113]
00:12:58.656 bw ( KiB/s): min=380991, max=791232, per=99.99%, avg=538951.79, stdev=9984.22, samples=266
00:12:58.656 iops : min=95249, max=197808, avg=134737.89, stdev=2496.05, samples=266
00:12:58.656 lat (usec) : 4=0.01%, 10=0.12%, 20=0.38%, 50=1.58%, 100=8.10%
00:12:58.656 lat (usec) : 250=65.35%, 500=23.76%, 750=0.14%, 1000=0.01%
00:12:58.656 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.49%, 50=0.02%
00:12:58.656 cpu : usr=69.24%, sys=0.43%, ctx=172859, majf=0, minf=9077
00:12:58.656 IO depths : 1=12.2%, 2=24.5%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:58.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:58.656 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:58.656 issued rwts: total=0,1348048,1348053,0 short=0,0,0,0 dropped=0,0,0,0
00:12:58.656 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:58.656
00:12:58.656 Run status group 0 (all jobs):
00:12:58.656 WRITE: bw=526MiB/s (552MB/s), 526MiB/s-526MiB/s (552MB/s-552MB/s), io=5266MiB (5522MB), run=10004-10004msec
00:12:58.656 TRIM: bw=526MiB/s (552MB/s), 526MiB/s-526MiB/s (552MB/s-552MB/s), io=5266MiB (5522MB), run=10004-10004msec
00:12:58.656 -----------------------------------------------------
00:12:58.656 Suppressions used:
00:12:58.656 count bytes template
00:12:58.656 14 129 /usr/src/fio/parse.c
00:12:58.656 1 904 libcrypto.so
00:12:58.656 -----------------------------------------------------
00:12:58.656
00:12:58.656
00:12:58.656 real 0m11.869s
00:12:58.656 user 1m39.499s
00:12:58.656 sys 0m1.562s
00:12:58.656 16:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:58.656 16:30:29 -- common/autotest_common.sh@10 -- # set +x
00:12:58.656 ************************************
00:12:58.656 END TEST bdev_fio_trim
00:12:58.656 ************************************
00:12:58.656 16:30:29
-- bdev/blockdev.sh@366 -- # rm -f 00:12:58.656 16:30:29 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:58.656 16:30:29 -- bdev/blockdev.sh@368 -- # popd 00:12:58.656 /home/vagrant/spdk_repo/spdk 00:12:58.656 16:30:29 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:58.656 00:12:58.656 real 0m24.356s 00:12:58.656 user 3m11.678s 00:12:58.656 sys 0m6.572s 00:12:58.656 16:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.656 16:30:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.656 ************************************ 00:12:58.656 END TEST bdev_fio 00:12:58.656 ************************************ 00:12:58.656 16:30:29 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:58.656 16:30:29 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:58.656 16:30:29 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:58.656 16:30:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:58.656 16:30:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.656 ************************************ 00:12:58.656 START TEST bdev_verify 00:12:58.656 ************************************ 00:12:58.656 16:30:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:58.656 [2024-07-13 16:30:29.715017] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:58.656 [2024-07-13 16:30:29.715300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121320 ] 00:12:58.656 [2024-07-13 16:30:29.873387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:58.656 [2024-07-13 16:30:29.948145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.656 [2024-07-13 16:30:29.948145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.914 [2024-07-13 16:30:30.130304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:58.914 [2024-07-13 16:30:30.130442] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:58.914 [2024-07-13 16:30:30.138227] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:58.914 [2024-07-13 16:30:30.138304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:58.914 [2024-07-13 16:30:30.146338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:58.914 [2024-07-13 16:30:30.146438] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:58.914 [2024-07-13 16:30:30.146538] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:58.914 [2024-07-13 16:30:30.261779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:58.914 [2024-07-13 16:30:30.261884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.914 [2024-07-13 16:30:30.261963] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:12:58.914 [2024-07-13 16:30:30.262008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.914 [2024-07-13 16:30:30.265036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.914 [2024-07-13 16:30:30.265086] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:59.172 Running I/O for 5 seconds... 00:13:04.483 00:13:04.483 Latency(us) 00:13:04.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.483 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x1000 00:13:04.483 Malloc0 : 5.16 1704.13 6.66 0.00 0.00 74545.03 1942.67 142806.06 00:13:04.483 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x1000 length 0x1000 00:13:04.483 Malloc0 : 5.16 1678.40 6.56 0.00 0.00 75643.61 1888.06 202724.69 00:13:04.483 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x800 00:13:04.483 Malloc1p0 : 5.17 1165.74 4.55 0.00 0.00 108878.98 3604.48 131820.98 00:13:04.483 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x800 length 0x800 00:13:04.483 Malloc1p0 : 5.17 1165.76 4.55 0.00 0.00 108869.21 3620.08 127826.41 00:13:04.483 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x800 00:13:04.483 Malloc1p1 : 5.17 1165.03 4.55 0.00 0.00 108758.72 3620.08 128825.05 00:13:04.483 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x800 length 0x800 00:13:04.483 Malloc1p1 : 5.17 1165.05 4.55 0.00 0.00 108769.95 3635.69 124331.15 00:13:04.483 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x200 00:13:04.483 Malloc2p0 : 5.17 1164.35 4.55 0.00 0.00 108666.14 3526.46 124830.48 00:13:04.483 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x200 length 0x200 00:13:04.483 Malloc2p0 : 5.17 1164.36 4.55 0.00 0.00 108644.92 3557.67 120835.90 00:13:04.483 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x200 00:13:04.483 Malloc2p1 : 5.18 1163.66 4.55 0.00 0.00 108554.97 3370.42 121834.54 00:13:04.483 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x200 length 0x200 00:13:04.483 Malloc2p1 : 5.18 1163.66 4.55 0.00 0.00 108559.51 3323.61 117340.65 00:13:04.483 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x200 00:13:04.483 Malloc2p2 : 5.18 1162.97 4.54 0.00 0.00 108425.88 3557.67 117839.97 00:13:04.483 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x200 length 0x200 00:13:04.483 Malloc2p2 : 5.18 1162.97 4.54 0.00 0.00 108437.35 3542.06 113845.39 00:13:04.483 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x200 00:13:04.483 Malloc2p3 : 5.18 1162.26 4.54 0.00 0.00 108329.75 3542.06 114844.04 00:13:04.483 Job: 
Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x200 length 0x200 00:13:04.483 Malloc2p3 : 5.18 1162.25 4.54 0.00 0.00 108337.75 3604.48 110350.14 00:13:04.483 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x200 00:13:04.483 Malloc2p4 : 5.18 1161.55 4.54 0.00 0.00 108241.95 3526.46 111348.78 00:13:04.483 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x200 length 0x200 00:13:04.483 Malloc2p4 : 5.18 1161.54 4.54 0.00 0.00 108224.53 3542.06 107354.21 00:13:04.483 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x200 00:13:04.483 Malloc2p5 : 5.19 1160.87 4.53 0.00 0.00 108103.03 3635.69 107853.53 00:13:04.483 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x200 length 0x200 00:13:04.483 Malloc2p5 : 5.19 1160.86 4.53 0.00 0.00 108106.30 3666.90 103858.96 00:13:04.483 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x200 00:13:04.483 Malloc2p6 : 5.19 1160.19 4.53 0.00 0.00 108023.67 3510.86 104358.28 00:13:04.483 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x200 length 0x200 00:13:04.483 Malloc2p6 : 5.19 1160.18 4.53 0.00 0.00 108015.31 3448.44 100363.70 00:13:04.483 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x200 00:13:04.483 Malloc2p7 : 5.19 1159.50 4.53 0.00 0.00 107899.32 3557.67 100863.02 00:13:04.483 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x200 length 0x200 00:13:04.483 Malloc2p7 : 5.19 1159.49 4.53 0.00 0.00 107892.77 3510.86 96868.45 00:13:04.483 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x1000 00:13:04.483 TestPT : 5.20 1144.61 4.47 0.00 0.00 109012.89 8550.89 100363.70 00:13:04.483 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x1000 length 0x1000 00:13:04.483 TestPT : 5.20 1128.63 4.41 0.00 0.00 110544.29 7084.13 160781.65 00:13:04.483 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x2000 00:13:04.483 raid0 : 5.21 1171.79 4.58 0.00 0.00 107038.65 3557.67 89378.62 00:13:04.483 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x2000 length 0x2000 00:13:04.483 raid0 : 5.21 1171.80 4.58 0.00 0.00 107066.21 3448.44 85384.05 00:13:04.483 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x2000 00:13:04.483 concat0 : 5.21 1171.10 4.57 0.00 0.00 106929.37 3604.48 85384.05 00:13:04.483 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x2000 length 0x2000 00:13:04.483 concat0 : 5.21 1171.10 4.57 0.00 0.00 106977.81 3666.90 85384.05 00:13:04.483 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 
0x1000 00:13:04.483 raid1 : 5.21 1170.41 4.57 0.00 0.00 106821.98 3776.12 83886.08 00:13:04.483 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x1000 length 0x1000 00:13:04.483 raid1 : 5.21 1170.41 4.57 0.00 0.00 106836.28 3947.76 83886.08 00:13:04.483 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:04.483 Verification LBA range: start 0x0 length 0x4e2 00:13:04.483 AIO0 : 5.22 1169.23 4.57 0.00 0.00 106637.75 7989.15 83886.08 00:13:04.484 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:04.484 Verification LBA range: start 0x4e2 length 0x4e2 00:13:04.484 AIO0 : 5.22 1169.62 4.57 0.00 0.00 106629.86 8987.79 83386.76 00:13:04.484 =================================================================================================================== 00:13:04.484 Total : 38273.48 149.51 0.00 0.00 105166.95 1888.06 202724.69 00:13:05.052 00:13:05.052 real 0m6.867s 00:13:05.052 user 0m11.215s 00:13:05.052 sys 0m0.664s 00:13:05.052 16:30:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.052 16:30:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.052 ************************************ 00:13:05.052 END TEST bdev_verify 00:13:05.052 ************************************ 00:13:05.312 16:30:36 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:05.312 16:30:36 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:05.312 16:30:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:05.312 16:30:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.312 ************************************ 00:13:05.312 START TEST bdev_verify_big_io 00:13:05.312 ************************************ 00:13:05.312 16:30:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:05.312 [2024-07-13 16:30:36.627151] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:13:05.312 [2024-07-13 16:30:36.627444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121428 ] 00:13:05.572 [2024-07-13 16:30:36.786151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:05.572 [2024-07-13 16:30:36.870586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.572 [2024-07-13 16:30:36.870587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.831 [2024-07-13 16:30:37.052009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:05.831 [2024-07-13 16:30:37.052112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:05.831 [2024-07-13 16:30:37.059915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:05.831 [2024-07-13 16:30:37.060004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:05.831 [2024-07-13 16:30:37.067988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:05.831 [2024-07-13 16:30:37.068039] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:05.831 [2024-07-13 16:30:37.068098] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:05.831 [2024-07-13 16:30:37.180308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:05.831 [2024-07-13 16:30:37.180453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.831 [2024-07-13 16:30:37.180509] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:05.831 [2024-07-13 16:30:37.180546] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.831 [2024-07-13 16:30:37.183751] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.831 [2024-07-13 16:30:37.183808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:06.090 [2024-07-13 16:30:37.391847] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.393168] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.395201] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.397188] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.398413] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.400434] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.401690] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.403671] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.404949] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.406959] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.408185] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.410198] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.411433] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.413442] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.415492] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.416746] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:06.090 [2024-07-13 16:30:37.453014] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:06.090 [2024-07-13 16:30:37.456087] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:06.090 Running I/O for 5 seconds... 00:13:12.658 00:13:12.658 Latency(us) 00:13:12.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.658 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x100 00:13:12.658 Malloc0 : 5.56 384.40 24.03 0.00 0.00 324359.91 18350.08 1094513.62 00:13:12.658 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x100 length 0x100 00:13:12.658 Malloc0 : 5.58 357.00 22.31 0.00 0.00 351183.18 19598.38 1310220.68 00:13:12.658 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x80 00:13:12.658 Malloc1p0 : 5.61 223.52 13.97 0.00 0.00 550904.17 44938.97 982665.51 00:13:12.658 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x80 length 0x80 00:13:12.658 Malloc1p0 : 5.58 289.48 18.09 0.00 0.00 428375.99 45937.62 894784.85 00:13:12.658 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x80 00:13:12.658 Malloc1p1 : 5.76 131.52 8.22 0.00 0.00 910669.61 40694.74 1901417.81 00:13:12.658 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x80 length 0x80 00:13:12.658 Malloc1p1 : 5.79 130.93 8.18 0.00 0.00 914743.24 41443.72 1989298.47 00:13:12.658 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x20 00:13:12.658 Malloc2p0 : 5.57 73.27 4.58 0.00 0.00 407214.87 6803.26 615164.59 00:13:12.658 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x20 length 0x20 00:13:12.658 Malloc2p0 : 5.58 73.11 4.57 0.00 0.00 410593.90 6553.60 619159.16 00:13:12.658 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x20 00:13:12.658 Malloc2p1 : 5.61 76.44 4.78 0.00 0.00 393397.54 6553.60 599186.29 00:13:12.658 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x20 length 0x20 00:13:12.658 Malloc2p1 : 5.58 73.09 4.57 0.00 0.00 408990.23 6491.18 607175.44 00:13:12.658 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x20 00:13:12.658 Malloc2p2 : 5.61 76.43 4.78 0.00 0.00 392019.31 6990.51 587202.56 00:13:12.658 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x20 length 0x20 00:13:12.658 Malloc2p2 : 5.58 73.08 4.57 0.00 0.00 407333.24 6740.85 595191.71 00:13:12.658 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x20 00:13:12.658 Malloc2p3 : 5.61 76.41 4.78 0.00 0.00 390449.22 6335.15 575218.83 00:13:12.658 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x20 length 0x20 00:13:12.658 Malloc2p3 : 5.58 73.06 4.57 0.00 0.00 405693.88 6147.90 583207.98 00:13:12.658 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x20 00:13:12.658 Malloc2p4 : 5.62 76.40 4.77 0.00 0.00 388949.54 5804.62 563235.11 00:13:12.658 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x20 length 0x20 00:13:12.658 Malloc2p4 : 5.59 73.05 4.57 0.00 0.00 404211.33 5648.58 571224.26 00:13:12.658 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x20 00:13:12.658 Malloc2p5 : 5.62 76.38 4.77 0.00 0.00 387459.65 5960.66 551251.38 00:13:12.658 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x20 length 0x20 00:13:12.658 Malloc2p5 : 5.59 73.04 4.56 0.00 0.00 402838.96 5773.41 559240.53 00:13:12.658 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x20 00:13:12.658 Malloc2p6 : 5.62 76.37 4.77 0.00 0.00 386076.80 6241.52 539267.66 00:13:12.658 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x20 length 0x20 00:13:12.658 Malloc2p6 : 5.59 73.02 4.56 0.00 0.00 401339.70 6116.69 547256.81 00:13:12.658 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x20 00:13:12.658 Malloc2p7 : 5.62 76.36 4.77 0.00 0.00 384756.33 6054.28 527283.93 00:13:12.658 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x20 length 0x20 00:13:12.658 Malloc2p7 : 5.63 76.25 4.77 0.00 0.00 385443.27 6054.28 531278.51 00:13:12.658 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x100 00:13:12.658 TestPT : 5.79 131.51 8.22 0.00 0.00 874247.78 52928.12 1893428.66 00:13:12.658 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x100 length 0x100 00:13:12.658 TestPT : 5.82 124.32 7.77 0.00 0.00 923411.72 71902.35 1949352.72 00:13:12.658 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x0 length 0x200 00:13:12.658 raid0 : 5.80 137.35 8.58 0.00 0.00 829425.75 38447.79 1877450.36 00:13:12.658 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:12.658 Verification LBA range: start 0x200 length 0x200 00:13:12.658 raid0 : 5.82 136.65 8.54 0.00 0.00 831319.06 38947.11 1957341.87 00:13:12.659 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:12.659 Verification LBA range: start 0x0 length 0x200 00:13:12.659 concat0 : 5.75 152.05 9.50 0.00 0.00 744378.70 36700.16 1877450.36 00:13:12.659 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:12.659 Verification LBA range: start 0x200 length 0x200 00:13:12.659 concat0 : 5.83 142.16 8.88 0.00 0.00 
789918.44 20222.54 1949352.72 00:13:12.659 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:12.659 Verification LBA range: start 0x0 length 0x100 00:13:12.659 raid1 : 5.80 159.02 9.94 0.00 0.00 701932.06 18350.08 1869461.21 00:13:12.659 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:12.659 Verification LBA range: start 0x100 length 0x100 00:13:12.659 raid1 : 5.83 170.07 10.63 0.00 0.00 654326.09 20846.69 1941363.57 00:13:12.659 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:12.659 Verification LBA range: start 0x0 length 0x4e 00:13:12.659 AIO0 : 5.83 165.08 10.32 0.00 0.00 407177.23 862.11 1086524.46 00:13:12.659 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:12.659 Verification LBA range: start 0x4e length 0x4e 00:13:12.659 AIO0 : 5.83 160.87 10.05 0.00 0.00 417773.90 1240.50 1118481.07 00:13:12.659 =================================================================================================================== 00:13:12.659 Total : 4191.68 261.98 0.00 0.00 540200.82 862.11 1989298.47 00:13:12.659 00:13:12.659 real 0m7.521s 00:13:12.659 user 0m13.594s 00:13:12.659 sys 0m0.574s 00:13:12.659 16:30:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.659 ************************************ 00:13:12.659 END TEST bdev_verify_big_io 00:13:12.659 16:30:44 -- common/autotest_common.sh@10 -- # set +x 00:13:12.659 ************************************ 00:13:12.917 16:30:44 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:12.917 16:30:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:12.917 16:30:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:12.917 16:30:44 -- common/autotest_common.sh@10 -- # set +x 00:13:12.917 ************************************ 00:13:12.917 START TEST bdev_write_zeroes 00:13:12.917 ************************************ 00:13:12.917 16:30:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:12.917 [2024-07-13 16:30:44.246898] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
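Each stage from here on drives the same bdevperf binary with a different workload; the bdev_write_zeroes run that starts above uses 4 KiB IOs (-o 4096) at queue depth 128 (-q 128) for one second (-t 1) against the bdev stack described in bdev.json. A sketch of an equivalent manual invocation, with the workspace paths taken from this log:

    # Sketch: replay the write_zeroes stage by hand.
    SPDK=/home/vagrant/spdk_repo/spdk

    # --json loads the bdev definitions; -w picks the workload that is
    # run against every bdev the config produces.
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1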
00:13:12.917 [2024-07-13 16:30:44.247291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121547 ] 00:13:13.176 [2024-07-13 16:30:44.417337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.176 [2024-07-13 16:30:44.501007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.434 [2024-07-13 16:30:44.680512] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:13.434 [2024-07-13 16:30:44.680627] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:13.435 [2024-07-13 16:30:44.688446] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:13.435 [2024-07-13 16:30:44.688536] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:13.435 [2024-07-13 16:30:44.696511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:13.435 [2024-07-13 16:30:44.696576] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:13.435 [2024-07-13 16:30:44.696613] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:13.435 [2024-07-13 16:30:44.811580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:13.435 [2024-07-13 16:30:44.811707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.435 [2024-07-13 16:30:44.811780] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:13.435 [2024-07-13 16:30:44.811822] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.435 [2024-07-13 16:30:44.814834] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.435 [2024-07-13 16:30:44.814924] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:13.693 Running I/O for 1 seconds... 
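The notices repeated before every run (Match on Malloc3, pt_bdev registered, created pt_bdev for: TestPT) come from the passthru vbdev module claiming Malloc3 and re-exporting it under the name TestPT, so each workload also exercises the vbdev forwarding path. The same layering can be reproduced over RPC; a sketch, assuming rpc.py points at a running target:

    # Sketch: layer a passthru vbdev over an existing base bdev.
    SPDK=/home/vagrant/spdk_repo/spdk

    # Claims Malloc3 and exposes it as TestPT; IO submitted to TestPT is
    # forwarded unmodified to the base bdev.
    "$SPDK/scripts/rpc.py" bdev_passthru_create -b Malloc3 -p TestPT

    # Deleting the vbdev releases the claim on Malloc3 again.
    "$SPDK/scripts/rpc.py" bdev_passthru_delete TestPT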
00:13:15.070
00:13:15.070 Latency(us)
00:13:15.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:15.070 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc0 : 1.03 6089.06 23.79 0.00 0.00 21009.76 674.86 37199.48
00:13:15.070 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc1p0 : 1.03 6082.10 23.76 0.00 0.00 20993.59 873.81 36450.50
00:13:15.070 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc1p1 : 1.03 6075.57 23.73 0.00 0.00 20976.20 928.43 35451.86
00:13:15.070 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc2p0 : 1.03 6069.27 23.71 0.00 0.00 20956.20 869.91 34702.87
00:13:15.070 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc2p1 : 1.03 6063.05 23.68 0.00 0.00 20941.24 901.12 33953.89
00:13:15.070 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc2p2 : 1.04 6056.70 23.66 0.00 0.00 20920.14 850.41 33204.91
00:13:15.070 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc2p3 : 1.04 6050.26 23.63 0.00 0.00 20903.24 893.32 32206.26
00:13:15.070 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc2p4 : 1.04 6044.07 23.61 0.00 0.00 20886.44 862.11 31457.28
00:13:15.070 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc2p5 : 1.04 6037.83 23.59 0.00 0.00 20863.86 889.42 30583.47
00:13:15.070 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc2p6 : 1.04 6031.64 23.56 0.00 0.00 20846.57 838.70 29834.48
00:13:15.070 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 Malloc2p7 : 1.05 6088.29 23.78 0.00 0.00 20614.62 905.02 28960.67
00:13:15.070 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 TestPT : 1.05 6081.80 23.76 0.00 0.00 20596.23 901.12 28086.86
00:13:15.070 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 raid0 : 1.05 6074.70 23.73 0.00 0.00 20571.20 1326.32 26838.55
00:13:15.070 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 concat0 : 1.05 6067.74 23.70 0.00 0.00 20538.77 1279.51 25590.25
00:13:15.070 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 raid1 : 1.06 6058.84 23.67 0.00 0.00 20496.05 2028.50 23592.96
00:13:15.070 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:15.070 AIO0 : 1.06 6030.74 23.56 0.00 0.00 20507.96 1412.14 22469.49
00:13:15.070 ===================================================================================================================
00:13:15.070 Total : 97001.68 378.91 0.00 0.00 20787.16 674.86 37199.48
00:13:15.328
00:13:15.328 real 0m2.630s
00:13:15.328 user 0m1.944s
00:13:15.328 sys 0m0.499s
00:13:15.328 16:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:15.328 16:30:46 -- common/autotest_common.sh@10 -- # set +x
00:13:15.328 ************************************
00:13:15.328 END TEST bdev_write_zeroes
00:13:15.328 ************************************
00:13:15.586 16:30:46 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:15.586 16:30:46 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:15.586 16:30:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.586 16:30:46 -- common/autotest_common.sh@10 -- # set +x 00:13:15.586 ************************************ 00:13:15.586 START TEST bdev_json_nonenclosed 00:13:15.586 ************************************ 00:13:15.586 16:30:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:15.586 [2024-07-13 16:30:46.911789] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:15.586 [2024-07-13 16:30:46.911991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121598 ] 00:13:15.586 [2024-07-13 16:30:47.056210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.845 [2024-07-13 16:30:47.137688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.845 [2024-07-13 16:30:47.137995] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:15.845 [2024-07-13 16:30:47.138050] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:16.104 00:13:16.104 real 0m0.484s 00:13:16.104 user 0m0.238s 00:13:16.104 sys 0m0.146s 00:13:16.104 16:30:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.104 16:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:16.104 ************************************ 00:13:16.104 END TEST bdev_json_nonenclosed 00:13:16.104 ************************************ 00:13:16.104 16:30:47 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:16.104 16:30:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:16.104 16:30:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:16.104 16:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:16.104 ************************************ 00:13:16.104 START TEST bdev_json_nonarray 00:13:16.104 ************************************ 00:13:16.104 16:30:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:16.104 [2024-07-13 16:30:47.471978] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
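The two JSON negative tests in this stretch probe the same loader: spdk_subsystem_init_from_json_config rejects nonenclosed.json above because the configuration is not a top-level object, and rejects nonarray.json below because its "subsystems" key is not an array. For contrast, a minimal shape the loader accepts; the Malloc parameters are illustrative rather than copied from bdev.json:

    # Sketch: the smallest configuration shape the JSON loader accepts,
    # a top-level object whose "subsystems" member is an array.
    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF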
00:13:16.104 [2024-07-13 16:30:47.472478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121636 ] 00:13:16.363 [2024-07-13 16:30:47.628901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.363 [2024-07-13 16:30:47.703875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.363 [2024-07-13 16:30:47.704149] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:16.363 [2024-07-13 16:30:47.704195] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:16.621 00:13:16.621 real 0m0.499s 00:13:16.621 user 0m0.259s 00:13:16.621 sys 0m0.140s 00:13:16.621 16:30:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.621 ************************************ 00:13:16.621 END TEST bdev_json_nonarray 00:13:16.621 ************************************ 00:13:16.621 16:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:16.621 16:30:47 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:16.622 16:30:47 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:16.622 16:30:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:16.622 16:30:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:16.622 16:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:16.622 ************************************ 00:13:16.622 START TEST bdev_qos 00:13:16.622 ************************************ 00:13:16.622 16:30:47 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:13:16.622 16:30:47 -- bdev/blockdev.sh@444 -- # QOS_PID=121658 00:13:16.622 Process qos testing pid: 121658 00:13:16.622 16:30:47 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 121658' 00:13:16.622 16:30:47 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:16.622 16:30:47 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:16.622 16:30:47 -- bdev/blockdev.sh@447 -- # waitforlisten 121658 00:13:16.622 16:30:47 -- common/autotest_common.sh@819 -- # '[' -z 121658 ']' 00:13:16.622 16:30:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.622 16:30:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:16.622 16:30:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.622 16:30:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:16.622 16:30:47 -- common/autotest_common.sh@10 -- # set +x 00:13:16.622 [2024-07-13 16:30:48.021203] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
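Where the earlier stages passed a complete JSON config at startup, the QoS suite starts bdevperf with -z, which brings the app up idle so it can be configured over RPC; waitforlisten blocks until the RPC socket answers before the bdev_malloc_create call below is issued. A sketch of the same start-idle-then-configure pattern, with a simple readiness poll standing in for the harness's waitforlisten helper:

    # Sketch: start bdevperf idle, wait for its RPC server, then configure it.
    SPDK=/home/vagrant/spdk_repo/spdk

    # -z: come up with no work and wait for RPC-driven configuration.
    "$SPDK/build/examples/bdevperf" -z -m 0x2 -q 256 -o 4096 -w randread -t 60 &
    qos_pid=$!

    # Poll until the default RPC socket (/var/tmp/spdk.sock) responds.
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done

    # Create the bdev under test: named Malloc_0, 128 MiB, 512-byte blocks.
    "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc_0 128 512
    # Rate limits would then be attached with bdev_set_qos_limit before the run.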
00:13:16.622 [2024-07-13 16:30:48.021401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121658 ] 00:13:16.881 [2024-07-13 16:30:48.163556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.881 [2024-07-13 16:30:48.261970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.815 16:30:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:17.815 16:30:48 -- common/autotest_common.sh@852 -- # return 0 00:13:17.815 16:30:48 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:17.815 16:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.815 16:30:48 -- common/autotest_common.sh@10 -- # set +x 00:13:17.815 Malloc_0 00:13:17.815 16:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.815 16:30:49 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:17.815 16:30:49 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:13:17.815 16:30:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:17.815 16:30:49 -- common/autotest_common.sh@889 -- # local i 00:13:17.815 16:30:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:17.815 16:30:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:17.815 16:30:49 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:17.815 16:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.815 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:13:17.815 16:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.815 16:30:49 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:17.815 16:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.815 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:13:17.815 [ 00:13:17.815 { 00:13:17.815 "name": "Malloc_0", 00:13:17.815 "aliases": [ 00:13:17.815 "718ffc1a-74d5-40e1-b9ab-93737359003a" 00:13:17.815 ], 00:13:17.815 "product_name": "Malloc disk", 00:13:17.815 "block_size": 512, 00:13:17.815 "num_blocks": 262144, 00:13:17.815 "uuid": "718ffc1a-74d5-40e1-b9ab-93737359003a", 00:13:17.815 "assigned_rate_limits": { 00:13:17.815 "rw_ios_per_sec": 0, 00:13:17.815 "rw_mbytes_per_sec": 0, 00:13:17.815 "r_mbytes_per_sec": 0, 00:13:17.815 "w_mbytes_per_sec": 0 00:13:17.815 }, 00:13:17.815 "claimed": false, 00:13:17.815 "zoned": false, 00:13:17.815 "supported_io_types": { 00:13:17.815 "read": true, 00:13:17.815 "write": true, 00:13:17.815 "unmap": true, 00:13:17.815 "write_zeroes": true, 00:13:17.815 "flush": true, 00:13:17.815 "reset": true, 00:13:17.815 "compare": false, 00:13:17.815 "compare_and_write": false, 00:13:17.815 "abort": true, 00:13:17.815 "nvme_admin": false, 00:13:17.815 "nvme_io": false 00:13:17.815 }, 00:13:17.815 "memory_domains": [ 00:13:17.815 { 00:13:17.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.815 "dma_device_type": 2 00:13:17.815 } 00:13:17.815 ], 00:13:17.815 "driver_specific": {} 00:13:17.815 } 00:13:17.815 ] 00:13:17.815 16:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.815 16:30:49 -- common/autotest_common.sh@895 -- # return 0 00:13:17.815 16:30:49 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:17.815 16:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.815 16:30:49 -- common/autotest_common.sh@10 -- # 
set +x 00:13:17.815 Null_1 00:13:17.815 16:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.815 16:30:49 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:17.815 16:30:49 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:13:17.815 16:30:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:17.815 16:30:49 -- common/autotest_common.sh@889 -- # local i 00:13:17.815 16:30:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:17.815 16:30:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:17.815 16:30:49 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:17.815 16:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.815 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:13:17.815 16:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.815 16:30:49 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:17.815 16:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.815 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:13:17.815 [ 00:13:17.815 { 00:13:17.815 "name": "Null_1", 00:13:17.815 "aliases": [ 00:13:17.815 "61c9c5ac-2572-4e91-a42c-a21fec2a735b" 00:13:17.815 ], 00:13:17.815 "product_name": "Null disk", 00:13:17.815 "block_size": 512, 00:13:17.815 "num_blocks": 262144, 00:13:17.815 "uuid": "61c9c5ac-2572-4e91-a42c-a21fec2a735b", 00:13:17.815 "assigned_rate_limits": { 00:13:17.815 "rw_ios_per_sec": 0, 00:13:17.815 "rw_mbytes_per_sec": 0, 00:13:17.815 "r_mbytes_per_sec": 0, 00:13:17.815 "w_mbytes_per_sec": 0 00:13:17.815 }, 00:13:17.815 "claimed": false, 00:13:17.815 "zoned": false, 00:13:17.815 "supported_io_types": { 00:13:17.815 "read": true, 00:13:17.815 "write": true, 00:13:17.815 "unmap": false, 00:13:17.815 "write_zeroes": true, 00:13:17.815 "flush": false, 00:13:17.815 "reset": true, 00:13:17.815 "compare": false, 00:13:17.815 "compare_and_write": false, 00:13:17.815 "abort": true, 00:13:17.815 "nvme_admin": false, 00:13:17.815 "nvme_io": false 00:13:17.815 }, 00:13:17.815 "driver_specific": {} 00:13:17.815 } 00:13:17.815 ] 00:13:17.815 16:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.815 16:30:49 -- common/autotest_common.sh@895 -- # return 0 00:13:17.815 16:30:49 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:17.815 16:30:49 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:17.815 16:30:49 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:17.815 16:30:49 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:17.815 16:30:49 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:17.815 16:30:49 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:17.815 16:30:49 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:17.815 16:30:49 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:17.815 16:30:49 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:17.815 16:30:49 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:17.815 16:30:49 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:17.815 16:30:49 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:17.815 16:30:49 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:17.815 16:30:49 -- bdev/blockdev.sh@376 -- # tail -1 00:13:17.815 Running I/O for 60 seconds... 
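For context on the throttling check that follows: qos_function_test first measures unthrottled read IOPS on Malloc_0 with iostat.py, derives an IOPS cap from that baseline (this run lands on 21000 from a measured 86343), applies it over RPC, and re-measures to confirm the throttled rate stays within +/-10% of the cap (hence lower_limit=18900 and upper_limit=23100 below). A minimal standalone sketch of the same round-trip, assuming a bdevperf instance already listening on /var/tmp/spdk.sock and the default repo layout; the divide-by-four derivation is one plausible reading of the numbers in this run, not a claim about the harness source:
  cd /home/vagrant/spdk_repo/spdk
  base=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print int($2)}')
  limit=$(((base / 4 / 1000) * 1000))   # 86343 -> 21000, matching this run
  scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$limit" Malloc_0
  got=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print int($2)}')
  [ "$got" -ge $((limit * 90 / 100)) ] && [ "$got" -le $((limit * 110 / 100)) ]   # pass criterion: within +/-10%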
00:13:23.081 16:30:54 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 86343.52 345374.08 0.00 0.00 350208.00 0.00 0.00 ' 00:13:23.081 16:30:54 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:23.081 16:30:54 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:23.081 16:30:54 -- bdev/blockdev.sh@378 -- # iostat_result=86343.52 00:13:23.081 16:30:54 -- bdev/blockdev.sh@383 -- # echo 86343 00:13:23.081 16:30:54 -- bdev/blockdev.sh@414 -- # io_result=86343 00:13:23.081 16:30:54 -- bdev/blockdev.sh@416 -- # iops_limit=21000 00:13:23.081 16:30:54 -- bdev/blockdev.sh@417 -- # '[' 21000 -gt 1000 ']' 00:13:23.081 16:30:54 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 21000 Malloc_0 00:13:23.081 16:30:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.081 16:30:54 -- common/autotest_common.sh@10 -- # set +x 00:13:23.081 16:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.081 16:30:54 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 21000 IOPS Malloc_0 00:13:23.081 16:30:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:23.081 16:30:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:23.081 16:30:54 -- common/autotest_common.sh@10 -- # set +x 00:13:23.081 ************************************ 00:13:23.081 START TEST bdev_qos_iops 00:13:23.081 ************************************ 00:13:23.081 16:30:54 -- common/autotest_common.sh@1104 -- # run_qos_test 21000 IOPS Malloc_0 00:13:23.081 16:30:54 -- bdev/blockdev.sh@387 -- # local qos_limit=21000 00:13:23.081 16:30:54 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:23.081 16:30:54 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:13:23.081 16:30:54 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:23.081 16:30:54 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:23.081 16:30:54 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:23.081 16:30:54 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:23.081 16:30:54 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:23.081 16:30:54 -- bdev/blockdev.sh@376 -- # tail -1 00:13:28.439 16:30:59 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 20999.59 83998.36 0.00 0.00 85428.00 0.00 0.00 ' 00:13:28.439 16:30:59 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:28.439 16:30:59 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:28.439 16:30:59 -- bdev/blockdev.sh@378 -- # iostat_result=20999.59 00:13:28.439 16:30:59 -- bdev/blockdev.sh@383 -- # echo 20999 00:13:28.439 ************************************ 00:13:28.439 END TEST bdev_qos_iops 00:13:28.439 ************************************ 00:13:28.439 16:30:59 -- bdev/blockdev.sh@390 -- # qos_result=20999 00:13:28.439 16:30:59 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:13:28.439 16:30:59 -- bdev/blockdev.sh@394 -- # lower_limit=18900 00:13:28.439 16:30:59 -- bdev/blockdev.sh@395 -- # upper_limit=23100 00:13:28.439 16:30:59 -- bdev/blockdev.sh@398 -- # '[' 20999 -lt 18900 ']' 00:13:28.439 16:30:59 -- bdev/blockdev.sh@398 -- # '[' 20999 -gt 23100 ']' 00:13:28.439 00:13:28.439 real 0m5.215s 00:13:28.439 user 0m0.113s 00:13:28.439 sys 0m0.038s 00:13:28.439 16:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.439 16:30:59 -- common/autotest_common.sh@10 -- # set +x 00:13:28.439 16:30:59 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:13:28.439 16:30:59 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:28.439 16:30:59 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:28.439 16:30:59 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:28.439 16:30:59 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:28.439 16:30:59 -- bdev/blockdev.sh@376 -- # tail -1 00:13:28.439 16:30:59 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:33.701 16:31:04 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 29991.43 119965.72 0.00 0.00 121856.00 0.00 0.00 ' 00:13:33.701 16:31:04 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:33.701 16:31:04 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:33.701 16:31:04 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:33.701 16:31:04 -- bdev/blockdev.sh@380 -- # iostat_result=121856.00 00:13:33.701 16:31:04 -- bdev/blockdev.sh@383 -- # echo 121856 00:13:33.701 16:31:04 -- bdev/blockdev.sh@425 -- # bw_limit=121856 00:13:33.701 16:31:04 -- bdev/blockdev.sh@426 -- # bw_limit=11 00:13:33.701 16:31:04 -- bdev/blockdev.sh@427 -- # '[' 11 -lt 2 ']' 00:13:33.701 16:31:04 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 11 Null_1 00:13:33.701 16:31:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.701 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:13:33.701 16:31:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.701 16:31:04 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 11 BANDWIDTH Null_1 00:13:33.701 16:31:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:33.701 16:31:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:33.701 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:13:33.701 ************************************ 00:13:33.701 START TEST bdev_qos_bw 00:13:33.701 ************************************ 00:13:33.701 16:31:04 -- common/autotest_common.sh@1104 -- # run_qos_test 11 BANDWIDTH Null_1 00:13:33.701 16:31:04 -- bdev/blockdev.sh@387 -- # local qos_limit=11 00:13:33.701 16:31:04 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:33.701 16:31:04 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:13:33.701 16:31:04 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:33.701 16:31:04 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:33.701 16:31:04 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:33.701 16:31:04 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:33.701 16:31:04 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:33.701 16:31:04 -- bdev/blockdev.sh@376 -- # tail -1 00:13:38.984 16:31:10 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2815.77 11263.07 0.00 0.00 11500.00 0.00 0.00 ' 00:13:38.984 16:31:10 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:38.984 16:31:10 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:38.984 16:31:10 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:38.984 16:31:10 -- bdev/blockdev.sh@380 -- # iostat_result=11500.00 00:13:38.984 16:31:10 -- bdev/blockdev.sh@383 -- # echo 11500 00:13:38.984 16:31:10 -- bdev/blockdev.sh@390 -- # qos_result=11500 00:13:38.984 16:31:10 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:38.984 16:31:10 -- bdev/blockdev.sh@392 -- # qos_limit=11264 00:13:38.984 16:31:10 -- bdev/blockdev.sh@394 -- # lower_limit=10137 00:13:38.984 16:31:10 -- bdev/blockdev.sh@395 -- # upper_limit=12390 00:13:38.984 16:31:10 -- bdev/blockdev.sh@398 -- # '[' 11500 -lt 10137 ']' 00:13:38.984 16:31:10 -- bdev/blockdev.sh@398 -- # '[' 
11500 -gt 12390 ']' 00:13:38.984 00:13:38.984 real 0m5.241s 00:13:38.984 user 0m0.126s 00:13:38.984 sys 0m0.032s 00:13:38.984 16:31:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.984 ************************************ 00:13:38.984 END TEST bdev_qos_bw 00:13:38.984 ************************************ 00:13:38.984 16:31:10 -- common/autotest_common.sh@10 -- # set +x 00:13:38.984 16:31:10 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:38.984 16:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.984 16:31:10 -- common/autotest_common.sh@10 -- # set +x 00:13:38.984 16:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.984 16:31:10 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:38.984 16:31:10 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:38.984 16:31:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.984 16:31:10 -- common/autotest_common.sh@10 -- # set +x 00:13:38.984 ************************************ 00:13:38.984 START TEST bdev_qos_ro_bw 00:13:38.984 ************************************ 00:13:38.984 16:31:10 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:38.984 16:31:10 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:38.984 16:31:10 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:38.984 16:31:10 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:38.984 16:31:10 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:38.984 16:31:10 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:38.984 16:31:10 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:38.984 16:31:10 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:38.984 16:31:10 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:38.984 16:31:10 -- bdev/blockdev.sh@376 -- # tail -1 00:13:44.284 16:31:15 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.96 2047.85 0.00 0.00 2068.00 0.00 0.00 ' 00:13:44.284 16:31:15 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:44.284 16:31:15 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:44.284 16:31:15 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:44.284 16:31:15 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:13:44.284 16:31:15 -- bdev/blockdev.sh@383 -- # echo 2068 00:13:44.284 ************************************ 00:13:44.284 END TEST bdev_qos_ro_bw 00:13:44.284 ************************************ 00:13:44.284 16:31:15 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:13:44.284 16:31:15 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:44.284 16:31:15 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:44.284 16:31:15 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:44.284 16:31:15 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:44.284 16:31:15 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:13:44.284 16:31:15 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:13:44.284 00:13:44.284 real 0m5.183s 00:13:44.284 user 0m0.105s 00:13:44.284 sys 0m0.050s 00:13:44.284 16:31:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.284 16:31:15 -- common/autotest_common.sh@10 -- # set +x 00:13:44.284 16:31:15 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:44.284 16:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.284 16:31:15 -- common/autotest_common.sh@10 -- # set +x 00:13:44.574 16:31:16 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.574 16:31:16 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:44.574 16:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.574 16:31:16 -- common/autotest_common.sh@10 -- # set +x 00:13:44.833 00:13:44.833 Latency(us) 00:13:44.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.833 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:44.833 Malloc_0 : 26.84 29202.58 114.07 0.00 0.00 8683.41 2028.50 503316.48 00:13:44.833 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:44.833 Null_1 : 26.97 29493.03 115.21 0.00 0.00 8661.27 589.04 128825.05 00:13:44.833 =================================================================================================================== 00:13:44.833 Total : 58695.61 229.28 0.00 0.00 8672.26 589.04 503316.48 00:13:44.833 0 00:13:44.833 16:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.833 16:31:16 -- bdev/blockdev.sh@459 -- # killprocess 121658 00:13:44.833 16:31:16 -- common/autotest_common.sh@926 -- # '[' -z 121658 ']' 00:13:44.833 16:31:16 -- common/autotest_common.sh@930 -- # kill -0 121658 00:13:44.833 16:31:16 -- common/autotest_common.sh@931 -- # uname 00:13:44.833 16:31:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.833 16:31:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121658 00:13:44.833 killing process with pid 121658 00:13:44.833 Received shutdown signal, test time was about 27.008062 seconds 00:13:44.833 00:13:44.833 Latency(us) 00:13:44.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.833 =================================================================================================================== 00:13:44.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.833 16:31:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:44.833 16:31:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:44.834 16:31:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121658' 00:13:44.834 16:31:16 -- common/autotest_common.sh@945 -- # kill 121658 00:13:44.834 16:31:16 -- common/autotest_common.sh@950 -- # wait 121658 00:13:45.402 16:31:16 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:13:45.402 00:13:45.402 real 0m28.643s 00:13:45.402 user 0m29.391s 00:13:45.402 sys 0m0.807s 00:13:45.402 16:31:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.402 16:31:16 -- common/autotest_common.sh@10 -- # set +x 00:13:45.402 ************************************ 00:13:45.402 END TEST bdev_qos 00:13:45.402 ************************************ 00:13:45.402 16:31:16 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:45.402 16:31:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:45.402 16:31:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.402 16:31:16 -- common/autotest_common.sh@10 -- # set +x 00:13:45.402 ************************************ 00:13:45.402 START TEST bdev_qd_sampling 00:13:45.402 ************************************ 00:13:45.402 16:31:16 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:13:45.402 16:31:16 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:13:45.402 16:31:16 -- bdev/blockdev.sh@539 -- # QD_PID=122129 00:13:45.402 16:31:16 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing 
pid: 122129' 00:13:45.402 Process bdev QD sampling period testing pid: 122129 00:13:45.402 16:31:16 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:45.402 16:31:16 -- bdev/blockdev.sh@542 -- # waitforlisten 122129 00:13:45.402 16:31:16 -- common/autotest_common.sh@819 -- # '[' -z 122129 ']' 00:13:45.402 16:31:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.402 16:31:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:45.402 16:31:16 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:45.402 16:31:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.402 16:31:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:45.402 16:31:16 -- common/autotest_common.sh@10 -- # set +x 00:13:45.402 [2024-07-13 16:31:16.744611] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:45.402 [2024-07-13 16:31:16.745109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122129 ] 00:13:45.660 [2024-07-13 16:31:16.915821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:45.660 [2024-07-13 16:31:16.999292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.660 [2024-07-13 16:31:16.999302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.594 16:31:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:46.594 16:31:17 -- common/autotest_common.sh@852 -- # return 0 00:13:46.594 16:31:17 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:46.594 16:31:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.594 16:31:17 -- common/autotest_common.sh@10 -- # set +x 00:13:46.594 Malloc_QD 00:13:46.594 16:31:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.594 16:31:17 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:13:46.594 16:31:17 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:13:46.594 16:31:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:46.594 16:31:17 -- common/autotest_common.sh@889 -- # local i 00:13:46.594 16:31:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:46.594 16:31:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:46.594 16:31:17 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:46.594 16:31:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.594 16:31:17 -- common/autotest_common.sh@10 -- # set +x 00:13:46.594 16:31:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.594 16:31:17 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:46.594 16:31:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.594 16:31:17 -- common/autotest_common.sh@10 -- # set +x 00:13:46.594 [ 00:13:46.594 { 00:13:46.594 "name": "Malloc_QD", 00:13:46.594 "aliases": [ 00:13:46.594 "76ced1e7-68ab-4619-8040-30953b10b548" 00:13:46.594 ], 00:13:46.594 "product_name": "Malloc disk", 00:13:46.594 "block_size": 512, 00:13:46.594 "num_blocks": 262144, 
00:13:46.594 "uuid": "76ced1e7-68ab-4619-8040-30953b10b548", 00:13:46.594 "assigned_rate_limits": { 00:13:46.594 "rw_ios_per_sec": 0, 00:13:46.594 "rw_mbytes_per_sec": 0, 00:13:46.594 "r_mbytes_per_sec": 0, 00:13:46.594 "w_mbytes_per_sec": 0 00:13:46.594 }, 00:13:46.594 "claimed": false, 00:13:46.594 "zoned": false, 00:13:46.594 "supported_io_types": { 00:13:46.594 "read": true, 00:13:46.594 "write": true, 00:13:46.594 "unmap": true, 00:13:46.594 "write_zeroes": true, 00:13:46.594 "flush": true, 00:13:46.594 "reset": true, 00:13:46.594 "compare": false, 00:13:46.594 "compare_and_write": false, 00:13:46.594 "abort": true, 00:13:46.594 "nvme_admin": false, 00:13:46.594 "nvme_io": false 00:13:46.594 }, 00:13:46.594 "memory_domains": [ 00:13:46.594 { 00:13:46.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.594 "dma_device_type": 2 00:13:46.594 } 00:13:46.594 ], 00:13:46.594 "driver_specific": {} 00:13:46.594 } 00:13:46.594 ] 00:13:46.594 16:31:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.594 16:31:17 -- common/autotest_common.sh@895 -- # return 0 00:13:46.594 16:31:17 -- bdev/blockdev.sh@548 -- # sleep 2 00:13:46.594 16:31:17 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:46.594 Running I/O for 5 seconds... 00:13:48.495 16:31:19 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:13:48.495 16:31:19 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:13:48.495 16:31:19 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:13:48.495 16:31:19 -- bdev/blockdev.sh@519 -- # local iostats 00:13:48.495 16:31:19 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:48.495 16:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.495 16:31:19 -- common/autotest_common.sh@10 -- # set +x 00:13:48.495 16:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.495 16:31:19 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:48.495 16:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.495 16:31:19 -- common/autotest_common.sh@10 -- # set +x 00:13:48.495 16:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.495 16:31:19 -- bdev/blockdev.sh@523 -- # iostats='{ 00:13:48.495 "tick_rate": 2100000000, 00:13:48.495 "ticks": 1522449911044, 00:13:48.495 "bdevs": [ 00:13:48.495 { 00:13:48.495 "name": "Malloc_QD", 00:13:48.495 "bytes_read": 937464320, 00:13:48.495 "num_read_ops": 228867, 00:13:48.495 "bytes_written": 0, 00:13:48.495 "num_write_ops": 0, 00:13:48.495 "bytes_unmapped": 0, 00:13:48.495 "num_unmap_ops": 0, 00:13:48.495 "bytes_copied": 0, 00:13:48.495 "num_copy_ops": 0, 00:13:48.495 "read_latency_ticks": 2087983308858, 00:13:48.495 "max_read_latency_ticks": 10190462, 00:13:48.495 "min_read_latency_ticks": 403944, 00:13:48.495 "write_latency_ticks": 0, 00:13:48.495 "max_write_latency_ticks": 0, 00:13:48.495 "min_write_latency_ticks": 0, 00:13:48.495 "unmap_latency_ticks": 0, 00:13:48.495 "max_unmap_latency_ticks": 0, 00:13:48.495 "min_unmap_latency_ticks": 0, 00:13:48.495 "copy_latency_ticks": 0, 00:13:48.495 "max_copy_latency_ticks": 0, 00:13:48.495 "min_copy_latency_ticks": 0, 00:13:48.495 "io_error": {}, 00:13:48.495 "queue_depth_polling_period": 10, 00:13:48.495 "queue_depth": 512, 00:13:48.495 "io_time": 30, 00:13:48.495 "weighted_io_time": 15360 00:13:48.495 } 00:13:48.495 ] 00:13:48.495 }' 00:13:48.495 16:31:19 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
00:13:48.495 16:31:19 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:13:48.495 16:31:19 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:13:48.495 16:31:19 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:13:48.495 16:31:19 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:48.495 16:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.495 16:31:19 -- common/autotest_common.sh@10 -- # set +x 00:13:48.495 00:13:48.495 Latency(us) 00:13:48.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.495 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:48.495 Malloc_QD : 2.02 58530.33 228.63 0.00 0.00 4363.15 1380.94 5648.58 00:13:48.495 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:48.495 Malloc_QD : 2.02 59158.26 231.09 0.00 0.00 4317.55 1178.09 4743.56 00:13:48.495 =================================================================================================================== 00:13:48.495 Total : 117688.59 459.72 0.00 0.00 4340.23 1178.09 5648.58 00:13:48.495 0 00:13:48.495 16:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.495 16:31:19 -- bdev/blockdev.sh@552 -- # killprocess 122129 00:13:48.495 16:31:19 -- common/autotest_common.sh@926 -- # '[' -z 122129 ']' 00:13:48.495 16:31:19 -- common/autotest_common.sh@930 -- # kill -0 122129 00:13:48.495 16:31:19 -- common/autotest_common.sh@931 -- # uname 00:13:48.495 16:31:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:48.754 16:31:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122129 00:13:48.754 16:31:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:48.754 killing process with pid 122129 00:13:48.754 16:31:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:48.754 16:31:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122129' 00:13:48.754 Received shutdown signal, test time was about 2.094769 seconds 00:13:48.754 00:13:48.754 Latency(us) 00:13:48.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.754 =================================================================================================================== 00:13:48.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:48.754 16:31:19 -- common/autotest_common.sh@945 -- # kill 122129 00:13:48.754 16:31:19 -- common/autotest_common.sh@950 -- # wait 122129 00:13:49.014 16:31:20 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:13:49.014 00:13:49.014 real 0m3.758s 00:13:49.014 user 0m7.114s 00:13:49.014 sys 0m0.470s 00:13:49.014 16:31:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.014 16:31:20 -- common/autotest_common.sh@10 -- # set +x 00:13:49.014 ************************************ 00:13:49.014 END TEST bdev_qd_sampling 00:13:49.014 ************************************ 00:13:49.273 16:31:20 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:13:49.273 16:31:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:49.273 16:31:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:49.273 16:31:20 -- common/autotest_common.sh@10 -- # set +x 00:13:49.273 ************************************ 00:13:49.273 START TEST bdev_error 00:13:49.273 ************************************ 00:13:49.273 16:31:20 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:13:49.273 16:31:20 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:13:49.273 
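The variables being traced here name the bdevs that error_test_suite is about to build: a Malloc bdev Dev_1 wrapped by an error-injection bdev EE_Dev_1, plus a second plain Malloc bdev Dev_2. A condensed sketch of that setup, using the same RPCs the suite issues further down (sizes and names as in this run; 128 and 512 are megabytes and block size, yielding the num_blocks 262144 seen in the dumps):
  scripts/rpc.py bdev_malloc_create -b Dev_1 128 512   # 128 MB, 512 B blocks
  scripts/rpc.py bdev_error_create Dev_1               # exposes EE_Dev_1 stacked on Dev_1
  scripts/rpc.py bdev_malloc_create -b Dev_2 128 512
  scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os, then pass through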
16:31:20 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:13:49.273 16:31:20 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:13:49.273 16:31:20 -- bdev/blockdev.sh@470 -- # ERR_PID=122217 00:13:49.273 16:31:20 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 122217' 00:13:49.273 Process error testing pid: 122217 00:13:49.273 16:31:20 -- bdev/blockdev.sh@472 -- # waitforlisten 122217 00:13:49.273 16:31:20 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:49.273 16:31:20 -- common/autotest_common.sh@819 -- # '[' -z 122217 ']' 00:13:49.273 16:31:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.273 16:31:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:49.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.274 16:31:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.274 16:31:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:49.274 16:31:20 -- common/autotest_common.sh@10 -- # set +x 00:13:49.274 [2024-07-13 16:31:20.560574] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:49.274 [2024-07-13 16:31:20.560780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122217 ] 00:13:49.274 [2024-07-13 16:31:20.704700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.532 [2024-07-13 16:31:20.781608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.101 16:31:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:50.101 16:31:21 -- common/autotest_common.sh@852 -- # return 0 00:13:50.101 16:31:21 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:50.101 16:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.101 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.101 Dev_1 00:13:50.101 16:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.101 16:31:21 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:50.101 16:31:21 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:50.101 16:31:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:50.101 16:31:21 -- common/autotest_common.sh@889 -- # local i 00:13:50.101 16:31:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:50.101 16:31:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:50.101 16:31:21 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:50.101 16:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.101 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.101 16:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.101 16:31:21 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:50.101 16:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.101 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.101 [ 00:13:50.101 { 00:13:50.101 "name": "Dev_1", 00:13:50.101 "aliases": [ 00:13:50.101 "d14a3873-484e-45d5-a11b-deb2320a72d5" 00:13:50.101 ], 00:13:50.101 "product_name": "Malloc disk", 00:13:50.101 "block_size": 512, 00:13:50.101 "num_blocks": 262144, 
00:13:50.101 "uuid": "d14a3873-484e-45d5-a11b-deb2320a72d5", 00:13:50.101 "assigned_rate_limits": { 00:13:50.101 "rw_ios_per_sec": 0, 00:13:50.101 "rw_mbytes_per_sec": 0, 00:13:50.101 "r_mbytes_per_sec": 0, 00:13:50.101 "w_mbytes_per_sec": 0 00:13:50.101 }, 00:13:50.101 "claimed": false, 00:13:50.101 "zoned": false, 00:13:50.101 "supported_io_types": { 00:13:50.101 "read": true, 00:13:50.101 "write": true, 00:13:50.101 "unmap": true, 00:13:50.101 "write_zeroes": true, 00:13:50.101 "flush": true, 00:13:50.101 "reset": true, 00:13:50.101 "compare": false, 00:13:50.101 "compare_and_write": false, 00:13:50.101 "abort": true, 00:13:50.101 "nvme_admin": false, 00:13:50.101 "nvme_io": false 00:13:50.101 }, 00:13:50.101 "memory_domains": [ 00:13:50.101 { 00:13:50.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.101 "dma_device_type": 2 00:13:50.101 } 00:13:50.101 ], 00:13:50.101 "driver_specific": {} 00:13:50.101 } 00:13:50.101 ] 00:13:50.101 16:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.101 16:31:21 -- common/autotest_common.sh@895 -- # return 0 00:13:50.101 16:31:21 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:50.101 16:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.101 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.101 true 00:13:50.101 16:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.101 16:31:21 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:50.101 16:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.101 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.360 Dev_2 00:13:50.360 16:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.360 16:31:21 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:50.360 16:31:21 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:50.360 16:31:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:50.360 16:31:21 -- common/autotest_common.sh@889 -- # local i 00:13:50.360 16:31:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:50.360 16:31:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:50.360 16:31:21 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:50.360 16:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.360 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.360 16:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.360 16:31:21 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:50.360 16:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.360 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.360 [ 00:13:50.360 { 00:13:50.360 "name": "Dev_2", 00:13:50.360 "aliases": [ 00:13:50.360 "e04f5232-29c4-4d40-b84a-5772516fbc5f" 00:13:50.360 ], 00:13:50.360 "product_name": "Malloc disk", 00:13:50.360 "block_size": 512, 00:13:50.360 "num_blocks": 262144, 00:13:50.360 "uuid": "e04f5232-29c4-4d40-b84a-5772516fbc5f", 00:13:50.360 "assigned_rate_limits": { 00:13:50.360 "rw_ios_per_sec": 0, 00:13:50.360 "rw_mbytes_per_sec": 0, 00:13:50.360 "r_mbytes_per_sec": 0, 00:13:50.360 "w_mbytes_per_sec": 0 00:13:50.360 }, 00:13:50.360 "claimed": false, 00:13:50.360 "zoned": false, 00:13:50.360 "supported_io_types": { 00:13:50.360 "read": true, 00:13:50.360 "write": true, 00:13:50.360 "unmap": true, 00:13:50.360 "write_zeroes": true, 00:13:50.360 "flush": true, 00:13:50.360 "reset": true, 00:13:50.360 "compare": false, 
00:13:50.360 "compare_and_write": false, 00:13:50.360 "abort": true, 00:13:50.360 "nvme_admin": false, 00:13:50.360 "nvme_io": false 00:13:50.360 }, 00:13:50.360 "memory_domains": [ 00:13:50.360 { 00:13:50.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.360 "dma_device_type": 2 00:13:50.360 } 00:13:50.360 ], 00:13:50.360 "driver_specific": {} 00:13:50.360 } 00:13:50.360 ] 00:13:50.360 16:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.360 16:31:21 -- common/autotest_common.sh@895 -- # return 0 00:13:50.360 16:31:21 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:50.360 16:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.360 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.360 16:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.360 16:31:21 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:50.360 16:31:21 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:50.360 Running I/O for 5 seconds... 00:13:51.297 16:31:22 -- bdev/blockdev.sh@485 -- # kill -0 122217 00:13:51.297 Process is existed as continue on error is set. Pid: 122217 00:13:51.297 16:31:22 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 122217' 00:13:51.297 16:31:22 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:51.297 16:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.297 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:13:51.297 16:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.297 16:31:22 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:51.297 16:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.297 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:13:51.297 16:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.297 16:31:22 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:51.297 Timeout while waiting for response: 00:13:51.297 00:13:51.297 00:13:55.511 00:13:55.511 Latency(us) 00:13:55.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.511 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:55.511 EE_Dev_1 : 0.93 50265.77 196.35 5.40 0.00 316.01 141.41 667.06 00:13:55.511 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:55.511 Dev_2 : 5.00 107500.01 419.92 0.00 0.00 146.57 68.75 35701.52 00:13:55.511 =================================================================================================================== 00:13:55.511 Total : 157765.78 616.27 5.40 0.00 160.07 68.75 35701.52 00:13:56.448 16:31:27 -- bdev/blockdev.sh@497 -- # killprocess 122217 00:13:56.448 16:31:27 -- common/autotest_common.sh@926 -- # '[' -z 122217 ']' 00:13:56.448 16:31:27 -- common/autotest_common.sh@930 -- # kill -0 122217 00:13:56.448 16:31:27 -- common/autotest_common.sh@931 -- # uname 00:13:56.448 16:31:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:56.448 16:31:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122217 00:13:56.448 16:31:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:56.448 16:31:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:56.448 killing process with pid 122217 00:13:56.448 16:31:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122217' 00:13:56.448 Received shutdown signal, test time was about 
5.000000 seconds 00:13:56.448 00:13:56.448 Latency(us) 00:13:56.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.448 =================================================================================================================== 00:13:56.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:56.448 16:31:27 -- common/autotest_common.sh@945 -- # kill 122217 00:13:56.448 16:31:27 -- common/autotest_common.sh@950 -- # wait 122217 00:13:57.018 16:31:28 -- bdev/blockdev.sh@501 -- # ERR_PID=122314 00:13:57.018 16:31:28 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:57.018 Process error testing pid: 122314 00:13:57.018 16:31:28 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 122314' 00:13:57.018 16:31:28 -- bdev/blockdev.sh@503 -- # waitforlisten 122314 00:13:57.018 16:31:28 -- common/autotest_common.sh@819 -- # '[' -z 122314 ']' 00:13:57.018 16:31:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.018 16:31:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:57.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.018 16:31:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.018 16:31:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:57.018 16:31:28 -- common/autotest_common.sh@10 -- # set +x 00:13:57.018 [2024-07-13 16:31:28.273434] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:57.018 [2024-07-13 16:31:28.273713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122314 ] 00:13:57.018 [2024-07-13 16:31:28.428672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.282 [2024-07-13 16:31:28.503325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.847 16:31:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:57.847 16:31:29 -- common/autotest_common.sh@852 -- # return 0 00:13:57.847 16:31:29 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:57.847 16:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.847 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:13:57.847 Dev_1 00:13:57.847 16:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.847 16:31:29 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:57.847 16:31:29 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:57.847 16:31:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:57.847 16:31:29 -- common/autotest_common.sh@889 -- # local i 00:13:57.847 16:31:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:57.847 16:31:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:57.847 16:31:29 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:57.847 16:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.847 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:13:57.847 16:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.847 16:31:29 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:57.847 16:31:29 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:13:57.847 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:13:57.847 [ 00:13:57.847 { 00:13:57.847 "name": "Dev_1", 00:13:57.847 "aliases": [ 00:13:57.847 "7620e635-6a7f-4c52-ba87-d6f3ed6fa32d" 00:13:57.847 ], 00:13:57.847 "product_name": "Malloc disk", 00:13:57.847 "block_size": 512, 00:13:57.847 "num_blocks": 262144, 00:13:57.847 "uuid": "7620e635-6a7f-4c52-ba87-d6f3ed6fa32d", 00:13:57.847 "assigned_rate_limits": { 00:13:57.847 "rw_ios_per_sec": 0, 00:13:57.847 "rw_mbytes_per_sec": 0, 00:13:57.847 "r_mbytes_per_sec": 0, 00:13:57.847 "w_mbytes_per_sec": 0 00:13:57.847 }, 00:13:57.847 "claimed": false, 00:13:57.847 "zoned": false, 00:13:57.847 "supported_io_types": { 00:13:57.847 "read": true, 00:13:57.847 "write": true, 00:13:57.847 "unmap": true, 00:13:57.847 "write_zeroes": true, 00:13:57.847 "flush": true, 00:13:57.847 "reset": true, 00:13:57.847 "compare": false, 00:13:57.847 "compare_and_write": false, 00:13:57.847 "abort": true, 00:13:57.847 "nvme_admin": false, 00:13:57.847 "nvme_io": false 00:13:57.847 }, 00:13:57.847 "memory_domains": [ 00:13:57.847 { 00:13:57.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.847 "dma_device_type": 2 00:13:57.847 } 00:13:57.847 ], 00:13:57.847 "driver_specific": {} 00:13:57.847 } 00:13:57.847 ] 00:13:57.847 16:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.847 16:31:29 -- common/autotest_common.sh@895 -- # return 0 00:13:57.847 16:31:29 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:57.847 16:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.847 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:13:57.847 true 00:13:57.847 16:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.847 16:31:29 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:57.847 16:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.847 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:13:58.106 Dev_2 00:13:58.106 16:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.106 16:31:29 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:58.106 16:31:29 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:58.106 16:31:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:58.106 16:31:29 -- common/autotest_common.sh@889 -- # local i 00:13:58.106 16:31:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:58.106 16:31:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:58.106 16:31:29 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:58.106 16:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.106 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:13:58.106 16:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.106 16:31:29 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:58.106 16:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.106 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:13:58.106 [ 00:13:58.106 { 00:13:58.106 "name": "Dev_2", 00:13:58.106 "aliases": [ 00:13:58.106 "bcb7d55d-39dc-40b2-ab6a-6085519c674b" 00:13:58.106 ], 00:13:58.106 "product_name": "Malloc disk", 00:13:58.106 "block_size": 512, 00:13:58.106 "num_blocks": 262144, 00:13:58.106 "uuid": "bcb7d55d-39dc-40b2-ab6a-6085519c674b", 00:13:58.106 "assigned_rate_limits": { 00:13:58.106 "rw_ios_per_sec": 0, 00:13:58.106 "rw_mbytes_per_sec": 0, 00:13:58.106 "r_mbytes_per_sec": 0, 00:13:58.107 
"w_mbytes_per_sec": 0 00:13:58.107 }, 00:13:58.107 "claimed": false, 00:13:58.107 "zoned": false, 00:13:58.107 "supported_io_types": { 00:13:58.107 "read": true, 00:13:58.107 "write": true, 00:13:58.107 "unmap": true, 00:13:58.107 "write_zeroes": true, 00:13:58.107 "flush": true, 00:13:58.107 "reset": true, 00:13:58.107 "compare": false, 00:13:58.107 "compare_and_write": false, 00:13:58.107 "abort": true, 00:13:58.107 "nvme_admin": false, 00:13:58.107 "nvme_io": false 00:13:58.107 }, 00:13:58.107 "memory_domains": [ 00:13:58.107 { 00:13:58.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.107 "dma_device_type": 2 00:13:58.107 } 00:13:58.107 ], 00:13:58.107 "driver_specific": {} 00:13:58.107 } 00:13:58.107 ] 00:13:58.107 16:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.107 16:31:29 -- common/autotest_common.sh@895 -- # return 0 00:13:58.107 16:31:29 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:58.107 16:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.107 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:13:58.107 16:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.107 16:31:29 -- bdev/blockdev.sh@513 -- # NOT wait 122314 00:13:58.107 16:31:29 -- common/autotest_common.sh@640 -- # local es=0 00:13:58.107 16:31:29 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 122314 00:13:58.107 16:31:29 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:58.107 16:31:29 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:58.107 16:31:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:58.107 16:31:29 -- common/autotest_common.sh@632 -- # type -t wait 00:13:58.107 16:31:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:58.107 16:31:29 -- common/autotest_common.sh@643 -- # wait 122314 00:13:58.107 Running I/O for 5 seconds... 
00:13:58.107 task offset: 116624 on job bdev=EE_Dev_1 fails 00:13:58.107 00:13:58.107 Latency(us) 00:13:58.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.107 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:58.107 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:58.107 EE_Dev_1 : 0.00 28909.33 112.93 6570.30 0.00 362.48 146.29 667.06 00:13:58.107 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:58.107 Dev_2 : 0.00 21376.09 83.50 0.00 0.00 504.32 148.24 908.92 00:13:58.107 =================================================================================================================== 00:13:58.107 Total : 50285.42 196.43 6570.30 0.00 439.41 146.29 908.92 00:13:58.107 [2024-07-13 16:31:29.497172] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:58.107 request: 00:13:58.107 { 00:13:58.107 "method": "perform_tests", 00:13:58.107 "req_id": 1 00:13:58.107 } 00:13:58.107 Got JSON-RPC error response 00:13:58.107 response: 00:13:58.107 { 00:13:58.107 "code": -32603, 00:13:58.107 "message": "bdevperf failed with error Operation not permitted" 00:13:58.107 } 00:13:58.675 16:31:30 -- common/autotest_common.sh@643 -- # es=255 00:13:58.675 16:31:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:58.675 16:31:30 -- common/autotest_common.sh@652 -- # es=127 00:13:58.675 16:31:30 -- common/autotest_common.sh@653 -- # case "$es" in 00:13:58.675 16:31:30 -- common/autotest_common.sh@660 -- # es=1 00:13:58.675 16:31:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:58.675 00:13:58.675 real 0m9.554s 00:13:58.675 user 0m9.529s 00:13:58.675 sys 0m1.006s 00:13:58.675 16:31:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.675 16:31:30 -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 ************************************ 00:13:58.675 END TEST bdev_error 00:13:58.675 ************************************ 00:13:58.675 16:31:30 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:58.675 16:31:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:58.675 16:31:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:58.675 16:31:30 -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 ************************************ 00:13:58.675 START TEST bdev_stat 00:13:58.675 ************************************ 00:13:58.675 16:31:30 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:13:58.675 16:31:30 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:58.675 16:31:30 -- bdev/blockdev.sh@594 -- # STAT_PID=122365 00:13:58.675 16:31:30 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:58.675 16:31:30 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 122365' 00:13:58.675 Process Bdev IO statistics testing pid: 122365 00:13:58.675 16:31:30 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:58.675 16:31:30 -- bdev/blockdev.sh@597 -- # waitforlisten 122365 00:13:58.675 16:31:30 -- common/autotest_common.sh@819 -- # '[' -z 122365 ']' 00:13:58.675 16:31:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.675 16:31:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:58.675 16:31:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:58.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.675 16:31:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:58.675 16:31:30 -- common/autotest_common.sh@10 -- # set +x 00:13:58.933 [2024-07-13 16:31:30.187991] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:58.933 [2024-07-13 16:31:30.188383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122365 ] 00:13:58.933 [2024-07-13 16:31:30.341891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:59.191 [2024-07-13 16:31:30.432193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.191 [2024-07-13 16:31:30.432196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.756 16:31:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:59.756 16:31:31 -- common/autotest_common.sh@852 -- # return 0 00:13:59.756 16:31:31 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:59.756 16:31:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.756 16:31:31 -- common/autotest_common.sh@10 -- # set +x 00:13:59.756 Malloc_STAT 00:13:59.756 16:31:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.756 16:31:31 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:59.756 16:31:31 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:13:59.756 16:31:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:59.756 16:31:31 -- common/autotest_common.sh@889 -- # local i 00:13:59.756 16:31:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:59.756 16:31:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:59.756 16:31:31 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:59.756 16:31:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.756 16:31:31 -- common/autotest_common.sh@10 -- # set +x 00:13:59.756 16:31:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.756 16:31:31 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:59.756 16:31:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.756 16:31:31 -- common/autotest_common.sh@10 -- # set +x 00:13:59.756 [ 00:13:59.756 { 00:13:59.756 "name": "Malloc_STAT", 00:13:59.756 "aliases": [ 00:13:59.756 "2299a3ab-3c40-4ac5-af57-6bf2776c676c" 00:13:59.756 ], 00:13:59.756 "product_name": "Malloc disk", 00:13:59.756 "block_size": 512, 00:13:59.756 "num_blocks": 262144, 00:13:59.756 "uuid": "2299a3ab-3c40-4ac5-af57-6bf2776c676c", 00:13:59.756 "assigned_rate_limits": { 00:13:59.756 "rw_ios_per_sec": 0, 00:13:59.756 "rw_mbytes_per_sec": 0, 00:13:59.756 "r_mbytes_per_sec": 0, 00:13:59.756 "w_mbytes_per_sec": 0 00:13:59.756 }, 00:13:59.756 "claimed": false, 00:13:59.756 "zoned": false, 00:13:59.756 "supported_io_types": { 00:13:59.756 "read": true, 00:13:59.756 "write": true, 00:13:59.756 "unmap": true, 00:13:59.756 "write_zeroes": true, 00:13:59.756 "flush": true, 00:13:59.756 "reset": true, 00:13:59.756 "compare": false, 00:13:59.756 "compare_and_write": false, 00:13:59.756 "abort": true, 00:13:59.756 "nvme_admin": false, 00:13:59.756 "nvme_io": false 00:13:59.756 }, 00:13:59.756 "memory_domains": [ 00:13:59.756 { 00:13:59.756 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:59.756 "dma_device_type": 2 00:13:59.756 } 00:13:59.756 ], 00:13:59.756 "driver_specific": {} 00:13:59.756 } 00:13:59.756 ] 00:13:59.756 16:31:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.756 16:31:31 -- common/autotest_common.sh@895 -- # return 0 00:13:59.756 16:31:31 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:59.756 16:31:31 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:00.014 Running I/O for 10 seconds... 00:14:01.937 16:31:33 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:01.937 16:31:33 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:01.937 16:31:33 -- bdev/blockdev.sh@558 -- # local iostats 00:14:01.937 16:31:33 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:01.937 16:31:33 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:01.937 16:31:33 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:01.937 16:31:33 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:01.937 16:31:33 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:01.937 16:31:33 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:01.937 16:31:33 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:01.937 16:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.937 16:31:33 -- common/autotest_common.sh@10 -- # set +x 00:14:01.937 16:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.937 16:31:33 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:01.937 "tick_rate": 2100000000, 00:14:01.937 "ticks": 1550426295772, 00:14:01.937 "bdevs": [ 00:14:01.937 { 00:14:01.937 "name": "Malloc_STAT", 00:14:01.937 "bytes_read": 950047232, 00:14:01.937 "num_read_ops": 231939, 00:14:01.937 "bytes_written": 0, 00:14:01.937 "num_write_ops": 0, 00:14:01.937 "bytes_unmapped": 0, 00:14:01.937 "num_unmap_ops": 0, 00:14:01.937 "bytes_copied": 0, 00:14:01.937 "num_copy_ops": 0, 00:14:01.937 "read_latency_ticks": 2074908118586, 00:14:01.937 "max_read_latency_ticks": 12388102, 00:14:01.937 "min_read_latency_ticks": 413130, 00:14:01.937 "write_latency_ticks": 0, 00:14:01.937 "max_write_latency_ticks": 0, 00:14:01.937 "min_write_latency_ticks": 0, 00:14:01.937 "unmap_latency_ticks": 0, 00:14:01.937 "max_unmap_latency_ticks": 0, 00:14:01.937 "min_unmap_latency_ticks": 0, 00:14:01.937 "copy_latency_ticks": 0, 00:14:01.937 "max_copy_latency_ticks": 0, 00:14:01.937 "min_copy_latency_ticks": 0, 00:14:01.937 "io_error": {} 00:14:01.937 } 00:14:01.937 ] 00:14:01.937 }' 00:14:01.937 16:31:33 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:01.937 16:31:33 -- bdev/blockdev.sh@567 -- # io_count1=231939 00:14:01.937 16:31:33 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:01.937 16:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.937 16:31:33 -- common/autotest_common.sh@10 -- # set +x 00:14:01.937 16:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.937 16:31:33 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:01.937 "tick_rate": 2100000000, 00:14:01.937 "ticks": 1550576907956, 00:14:01.937 "name": "Malloc_STAT", 00:14:01.937 "channels": [ 00:14:01.937 { 00:14:01.937 "thread_id": 2, 00:14:01.937 "bytes_read": 488636416, 00:14:01.937 "num_read_ops": 119296, 00:14:01.937 "bytes_written": 0, 00:14:01.937 "num_write_ops": 0, 00:14:01.937 "bytes_unmapped": 0, 00:14:01.937 "num_unmap_ops": 0, 00:14:01.937 "bytes_copied": 0, 00:14:01.937 
"num_copy_ops": 0, 00:14:01.937 "read_latency_ticks": 1074680585082, 00:14:01.937 "max_read_latency_ticks": 13231854, 00:14:01.937 "min_read_latency_ticks": 6934588, 00:14:01.937 "write_latency_ticks": 0, 00:14:01.937 "max_write_latency_ticks": 0, 00:14:01.937 "min_write_latency_ticks": 0, 00:14:01.937 "unmap_latency_ticks": 0, 00:14:01.937 "max_unmap_latency_ticks": 0, 00:14:01.937 "min_unmap_latency_ticks": 0, 00:14:01.937 "copy_latency_ticks": 0, 00:14:01.937 "max_copy_latency_ticks": 0, 00:14:01.937 "min_copy_latency_ticks": 0 00:14:01.937 }, 00:14:01.937 { 00:14:01.937 "thread_id": 3, 00:14:01.937 "bytes_read": 495976448, 00:14:01.937 "num_read_ops": 121088, 00:14:01.937 "bytes_written": 0, 00:14:01.937 "num_write_ops": 0, 00:14:01.937 "bytes_unmapped": 0, 00:14:01.937 "num_unmap_ops": 0, 00:14:01.937 "bytes_copied": 0, 00:14:01.937 "num_copy_ops": 0, 00:14:01.937 "read_latency_ticks": 1075576716978, 00:14:01.937 "max_read_latency_ticks": 9834268, 00:14:01.937 "min_read_latency_ticks": 5584830, 00:14:01.937 "write_latency_ticks": 0, 00:14:01.937 "max_write_latency_ticks": 0, 00:14:01.937 "min_write_latency_ticks": 0, 00:14:01.937 "unmap_latency_ticks": 0, 00:14:01.937 "max_unmap_latency_ticks": 0, 00:14:01.937 "min_unmap_latency_ticks": 0, 00:14:01.937 "copy_latency_ticks": 0, 00:14:01.937 "max_copy_latency_ticks": 0, 00:14:01.937 "min_copy_latency_ticks": 0 00:14:01.937 } 00:14:01.937 ] 00:14:01.937 }' 00:14:01.937 16:31:33 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:01.937 16:31:33 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=119296 00:14:01.937 16:31:33 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=119296 00:14:01.937 16:31:33 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:01.937 16:31:33 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=121088 00:14:01.937 16:31:33 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=240384 00:14:01.937 16:31:33 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:01.937 16:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.937 16:31:33 -- common/autotest_common.sh@10 -- # set +x 00:14:01.937 16:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.937 16:31:33 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:01.937 "tick_rate": 2100000000, 00:14:01.937 "ticks": 1550820931754, 00:14:01.937 "bdevs": [ 00:14:01.937 { 00:14:01.937 "name": "Malloc_STAT", 00:14:01.937 "bytes_read": 1042321920, 00:14:01.937 "num_read_ops": 254467, 00:14:01.937 "bytes_written": 0, 00:14:01.937 "num_write_ops": 0, 00:14:01.937 "bytes_unmapped": 0, 00:14:01.937 "num_unmap_ops": 0, 00:14:01.937 "bytes_copied": 0, 00:14:01.937 "num_copy_ops": 0, 00:14:01.937 "read_latency_ticks": 2276338585298, 00:14:01.937 "max_read_latency_ticks": 13666378, 00:14:01.937 "min_read_latency_ticks": 413130, 00:14:01.937 "write_latency_ticks": 0, 00:14:01.937 "max_write_latency_ticks": 0, 00:14:01.937 "min_write_latency_ticks": 0, 00:14:01.937 "unmap_latency_ticks": 0, 00:14:01.937 "max_unmap_latency_ticks": 0, 00:14:01.937 "min_unmap_latency_ticks": 0, 00:14:01.937 "copy_latency_ticks": 0, 00:14:01.937 "max_copy_latency_ticks": 0, 00:14:01.937 "min_copy_latency_ticks": 0, 00:14:01.937 "io_error": {} 00:14:01.937 } 00:14:01.937 ] 00:14:01.937 }' 00:14:01.937 16:31:33 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:14:02.196 16:31:33 -- bdev/blockdev.sh@576 -- # io_count2=254467 00:14:02.196 16:31:33 -- bdev/blockdev.sh@581 -- # '[' 240384 -lt 231939 ']' 00:14:02.196 
16:31:33 -- bdev/blockdev.sh@581 -- # '[' 240384 -gt 254467 ']' 00:14:02.196 16:31:33 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:02.196 16:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.196 16:31:33 -- common/autotest_common.sh@10 -- # set +x 00:14:02.196 00:14:02.196 Latency(us) 00:14:02.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.196 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:02.196 Malloc_STAT : 2.19 59347.42 231.83 0.00 0.00 4304.31 1022.05 7333.79 00:14:02.196 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:02.196 Malloc_STAT : 2.19 60692.96 237.08 0.00 0.00 4209.11 721.68 4712.35 00:14:02.196 =================================================================================================================== 00:14:02.196 Total : 120040.38 468.91 0.00 0.00 4256.15 721.68 7333.79 00:14:02.196 0 00:14:02.196 16:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.196 16:31:33 -- bdev/blockdev.sh@607 -- # killprocess 122365 00:14:02.196 16:31:33 -- common/autotest_common.sh@926 -- # '[' -z 122365 ']' 00:14:02.196 16:31:33 -- common/autotest_common.sh@930 -- # kill -0 122365 00:14:02.196 16:31:33 -- common/autotest_common.sh@931 -- # uname 00:14:02.196 16:31:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:02.196 16:31:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122365 00:14:02.196 16:31:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:02.196 killing process with pid 122365 00:14:02.196 16:31:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:02.196 16:31:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122365' 00:14:02.196 16:31:33 -- common/autotest_common.sh@945 -- # kill 122365 00:14:02.196 Received shutdown signal, test time was about 2.258546 seconds 00:14:02.196 00:14:02.196 Latency(us) 00:14:02.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.196 =================================================================================================================== 00:14:02.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:02.196 16:31:33 -- common/autotest_common.sh@950 -- # wait 122365 00:14:02.455 ************************************ 00:14:02.455 END TEST bdev_stat 00:14:02.455 ************************************ 00:14:02.455 16:31:33 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:14:02.455 00:14:02.455 real 0m3.782s 00:14:02.455 user 0m7.202s 00:14:02.455 sys 0m0.500s 00:14:02.455 16:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.455 16:31:33 -- common/autotest_common.sh@10 -- # set +x 00:14:02.714 16:31:33 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:14:02.714 16:31:33 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:14:02.714 16:31:33 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:02.714 16:31:33 -- bdev/blockdev.sh@809 -- # cleanup 00:14:02.714 16:31:33 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:02.714 16:31:33 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:02.714 16:31:33 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:14:02.714 16:31:33 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:14:02.714 16:31:33 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:14:02.714 16:31:33 -- bdev/blockdev.sh@38 -- # [[ bdev 
== xnvme ]] 00:14:02.714 ************************************ 00:14:02.714 END TEST blockdev_general 00:14:02.714 ************************************ 00:14:02.714 00:14:02.714 real 1m59.420s 00:14:02.714 user 5m11.311s 00:14:02.714 sys 0m25.621s 00:14:02.714 16:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.714 16:31:33 -- common/autotest_common.sh@10 -- # set +x 00:14:02.714 16:31:34 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:02.714 16:31:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:02.714 16:31:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.714 16:31:34 -- common/autotest_common.sh@10 -- # set +x 00:14:02.714 ************************************ 00:14:02.714 START TEST bdev_raid 00:14:02.714 ************************************ 00:14:02.715 16:31:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:02.715 * Looking for test storage... 00:14:02.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:02.715 16:31:34 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:02.715 16:31:34 -- bdev/nbd_common.sh@6 -- # set -e 00:14:02.715 16:31:34 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:02.715 16:31:34 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:02.715 16:31:34 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:02.715 16:31:34 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:02.715 16:31:34 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:02.715 16:31:34 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:02.715 16:31:34 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:02.715 16:31:34 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:02.715 16:31:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:02.715 16:31:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.715 16:31:34 -- common/autotest_common.sh@10 -- # set +x 00:14:02.973 ************************************ 00:14:02.973 START TEST raid_function_test_raid0 00:14:02.973 ************************************ 00:14:02.973 16:31:34 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:14:02.973 16:31:34 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:02.973 16:31:34 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:02.974 16:31:34 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:02.974 16:31:34 -- bdev/bdev_raid.sh@86 -- # raid_pid=122512 00:14:02.974 16:31:34 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122512' 00:14:02.974 Process raid pid: 122512 00:14:02.974 16:31:34 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:02.974 16:31:34 -- bdev/bdev_raid.sh@88 -- # waitforlisten 122512 /var/tmp/spdk-raid.sock 00:14:02.974 16:31:34 -- common/autotest_common.sh@819 -- # '[' -z 122512 ']' 00:14:02.974 16:31:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:02.974 16:31:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:02.974 16:31:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:02.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
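Note on the bdev_stat suite that finished above: the test snapshots whole-bdev read counts before and after a per-channel query, then checks that the per-channel sum (119296 + 121088 = 240384) lies between the two totals (231939 and 254467), since the channel counts were sampled in between. A minimal sketch of the same consistency check against a running bdevperf, assuming rpc.py and jq are available and the RPC socket path below (illustrative, not the suite's exact code):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# First whole-bdev snapshot of read ops.
io1=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
# Per-channel counts, summed across all reactor channels.
sum=$($RPC bdev_get_iostat -b Malloc_STAT -c | jq '[.channels[].num_read_ops] | add')
# Second whole-bdev snapshot.
io2=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
# The per-channel sum was taken between the two totals, so it must sit in [io1, io2].
[ "$sum" -ge "$io1" ] && [ "$sum" -le "$io2" ] && echo "iostat consistent"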
00:14:02.974 16:31:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:02.974 16:31:34 -- common/autotest_common.sh@10 -- # set +x 00:14:02.974 [2024-07-13 16:31:34.257884] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:02.974 [2024-07-13 16:31:34.258381] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.974 [2024-07-13 16:31:34.413590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.232 [2024-07-13 16:31:34.493389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.233 [2024-07-13 16:31:34.571576] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.800 16:31:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:03.800 16:31:35 -- common/autotest_common.sh@852 -- # return 0 00:14:03.800 16:31:35 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:03.800 16:31:35 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:03.800 16:31:35 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:03.800 16:31:35 -- bdev/bdev_raid.sh@70 -- # cat 00:14:03.800 16:31:35 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:04.059 [2024-07-13 16:31:35.444657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:04.059 [2024-07-13 16:31:35.448099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:04.059 [2024-07-13 16:31:35.448365] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:04.059 [2024-07-13 16:31:35.448502] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:04.059 [2024-07-13 16:31:35.448860] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:04.059 [2024-07-13 16:31:35.449614] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:04.059 [2024-07-13 16:31:35.449774] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:14:04.059 [2024-07-13 16:31:35.450074] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.059 Base_1 00:14:04.059 Base_2 00:14:04.059 16:31:35 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:04.059 16:31:35 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:04.059 16:31:35 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:04.319 16:31:35 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:04.319 16:31:35 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:04.319 16:31:35 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:04.319 16:31:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:04.319 16:31:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:04.319 16:31:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.319 16:31:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:04.319 16:31:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.319 16:31:35 -- bdev/nbd_common.sh@12 -- # local i 00:14:04.319 16:31:35 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:14:04.319 16:31:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.319 16:31:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:04.579 [2024-07-13 16:31:35.894239] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:04.579 /dev/nbd0 00:14:04.579 16:31:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:04.579 16:31:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:04.579 16:31:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:04.579 16:31:35 -- common/autotest_common.sh@857 -- # local i 00:14:04.579 16:31:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:04.579 16:31:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:04.579 16:31:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:04.579 16:31:35 -- common/autotest_common.sh@861 -- # break 00:14:04.579 16:31:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:04.579 16:31:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:04.579 16:31:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.579 1+0 records in 00:14:04.579 1+0 records out 00:14:04.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457474 s, 9.0 MB/s 00:14:04.579 16:31:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.579 16:31:35 -- common/autotest_common.sh@874 -- # size=4096 00:14:04.579 16:31:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.579 16:31:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:04.579 16:31:35 -- common/autotest_common.sh@877 -- # return 0 00:14:04.579 16:31:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.579 16:31:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.579 16:31:35 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:04.579 16:31:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:04.579 16:31:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:04.839 16:31:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:04.839 { 00:14:04.839 "nbd_device": "/dev/nbd0", 00:14:04.839 "bdev_name": "raid" 00:14:04.839 } 00:14:04.839 ]' 00:14:04.839 16:31:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:04.839 16:31:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:04.839 { 00:14:04.839 "nbd_device": "/dev/nbd0", 00:14:04.839 "bdev_name": "raid" 00:14:04.839 } 00:14:04.839 ]' 00:14:04.839 16:31:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:04.839 16:31:36 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:04.839 16:31:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:04.839 16:31:36 -- bdev/nbd_common.sh@65 -- # count=1 00:14:04.839 16:31:36 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:04.839 
16:31:36 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:04.839 4096+0 records in 00:14:04.839 4096+0 records out 00:14:04.839 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0330221 s, 63.5 MB/s 00:14:04.839 16:31:36 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:05.099 4096+0 records in 00:14:05.099 4096+0 records out 00:14:05.099 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.249582 s, 8.4 MB/s 00:14:05.099 16:31:36 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:05.099 16:31:36 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:05.099 16:31:36 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:05.099 16:31:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:05.099 16:31:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:05.099 16:31:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:05.099 16:31:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:05.099 128+0 records in 00:14:05.099 128+0 records out 00:14:05.099 65536 bytes (66 kB, 64 KiB) copied, 0.00102655 s, 63.8 MB/s 00:14:05.099 16:31:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:05.359 2035+0 records in 00:14:05.359 2035+0 records out 00:14:05.359 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00812384 s, 128 MB/s 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:05.359 
456+0 records in 00:14:05.359 456+0 records out 00:14:05.359 233472 bytes (233 kB, 228 KiB) copied, 0.00235724 s, 99.0 MB/s 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:05.359 16:31:36 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:05.359 16:31:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:05.359 16:31:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:05.359 16:31:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.359 16:31:36 -- bdev/nbd_common.sh@51 -- # local i 00:14:05.359 16:31:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.359 16:31:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:05.618 16:31:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:05.618 [2024-07-13 16:31:36.901504] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.618 16:31:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:05.618 16:31:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:05.618 16:31:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.618 16:31:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.618 16:31:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:05.618 16:31:36 -- bdev/nbd_common.sh@41 -- # break 00:14:05.618 16:31:36 -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.618 16:31:36 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:05.618 16:31:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:05.619 16:31:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:05.879 16:31:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:05.879 16:31:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:05.879 16:31:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:05.879 16:31:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:05.879 16:31:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:05.879 16:31:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:05.879 16:31:37 -- bdev/nbd_common.sh@65 -- # true 00:14:05.879 16:31:37 -- bdev/nbd_common.sh@65 -- # count=0 00:14:05.879 16:31:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:05.879 16:31:37 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:05.879 16:31:37 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:05.879 16:31:37 -- bdev/bdev_raid.sh@111 -- # killprocess 122512 00:14:05.879 16:31:37 -- common/autotest_common.sh@926 -- # '[' -z 122512 ']' 00:14:05.879 16:31:37 -- common/autotest_common.sh@930 -- # kill -0 122512 00:14:05.879 16:31:37 -- common/autotest_common.sh@931 -- # uname 00:14:05.879 16:31:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:05.879 16:31:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122512 00:14:05.879 killing process with pid 122512 00:14:05.879 16:31:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:05.879 16:31:37 -- common/autotest_common.sh@936 -- # '[' 
reactor_0 = sudo ']' 00:14:05.879 16:31:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122512' 00:14:05.879 16:31:37 -- common/autotest_common.sh@945 -- # kill 122512 00:14:05.879 [2024-07-13 16:31:37.164648] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.879 16:31:37 -- common/autotest_common.sh@950 -- # wait 122512 00:14:05.879 [2024-07-13 16:31:37.164795] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.879 [2024-07-13 16:31:37.164876] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.879 [2024-07-13 16:31:37.164891] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:14:05.879 [2024-07-13 16:31:37.205350] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.139 ************************************ 00:14:06.139 END TEST raid_function_test_raid0 00:14:06.139 ************************************ 00:14:06.139 16:31:37 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:06.139 00:14:06.139 real 0m3.412s 00:14:06.139 user 0m4.272s 00:14:06.139 sys 0m1.216s 00:14:06.139 16:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.139 16:31:37 -- common/autotest_common.sh@10 -- # set +x 00:14:06.400 16:31:37 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:06.400 16:31:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:06.400 16:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:06.400 16:31:37 -- common/autotest_common.sh@10 -- # set +x 00:14:06.400 ************************************ 00:14:06.400 START TEST raid_function_test_concat 00:14:06.400 ************************************ 00:14:06.400 16:31:37 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:14:06.400 16:31:37 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:06.400 16:31:37 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:06.400 16:31:37 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:06.400 16:31:37 -- bdev/bdev_raid.sh@86 -- # raid_pid=122665 00:14:06.400 16:31:37 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122665' 00:14:06.400 16:31:37 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:06.400 Process raid pid: 122665 00:14:06.400 16:31:37 -- bdev/bdev_raid.sh@88 -- # waitforlisten 122665 /var/tmp/spdk-raid.sock 00:14:06.400 16:31:37 -- common/autotest_common.sh@819 -- # '[' -z 122665 ']' 00:14:06.400 16:31:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:06.401 16:31:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:06.401 16:31:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:06.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:06.401 16:31:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:06.401 16:31:37 -- common/autotest_common.sh@10 -- # set +x 00:14:06.401 [2024-07-13 16:31:37.734687] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
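Note on the unmap verification used by raid_function_test above (and repeated by the concat run that follows): it is built entirely from coreutils. The test fills a reference file from /dev/urandom, mirrors it onto the raid bdev's nbd export, then for each (offset, count) pair issues blkdiscard on the device while zeroing the same range in the reference file with dd conv=notrunc, running cmp after every step. A standalone sketch of the same pattern, assuming $NBD is an nbd device backed by the raid bdev and that the base bdevs read unmapped blocks back as zeroes (as the malloc bdevs here do):

NBD=/dev/nbd0
dd if=/dev/urandom of=/raidrandtest bs=512 count=4096        # reference pattern, 2 MiB
dd if=/raidrandtest of=$NBD bs=512 count=4096 oflag=direct   # mirror it onto the device
blockdev --flushbufs $NBD
cmp -b -n 2097152 /raidrandtest $NBD                         # must match before any unmap
for range in "0 128" "1028 2035" "321 456"; do               # block offset/count pairs from the log
  set -- $range
  dd if=/dev/zero of=/raidrandtest bs=512 seek=$1 count=$2 conv=notrunc
  blkdiscard -o $(($1 * 512)) -l $(($2 * 512)) $NBD          # unmap the same byte range
  blockdev --flushbufs $NBD
  cmp -b -n 2097152 /raidrandtest $NBD                       # device must now read zeroes there
done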
00:14:06.401 [2024-07-13 16:31:37.735202] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.658 [2024-07-13 16:31:37.893021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.658 [2024-07-13 16:31:37.971963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.658 [2024-07-13 16:31:38.049899] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.224 16:31:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:07.224 16:31:38 -- common/autotest_common.sh@852 -- # return 0 00:14:07.224 16:31:38 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:14:07.224 16:31:38 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:14:07.224 16:31:38 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:07.224 16:31:38 -- bdev/bdev_raid.sh@70 -- # cat 00:14:07.224 16:31:38 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:07.792 [2024-07-13 16:31:38.973000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:07.792 [2024-07-13 16:31:38.978174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:07.792 [2024-07-13 16:31:38.978385] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:07.792 [2024-07-13 16:31:38.978532] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:07.792 [2024-07-13 16:31:38.978771] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:07.792 [2024-07-13 16:31:38.979245] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:07.792 [2024-07-13 16:31:38.979361] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:14:07.792 [2024-07-13 16:31:38.979691] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.792 Base_1 00:14:07.792 Base_2 00:14:07.792 16:31:38 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:07.792 16:31:39 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:07.792 16:31:39 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:07.792 16:31:39 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:07.792 16:31:39 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:07.792 16:31:39 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:07.792 16:31:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:07.792 16:31:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:07.792 16:31:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.792 16:31:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.792 16:31:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.792 16:31:39 -- bdev/nbd_common.sh@12 -- # local i 00:14:07.792 16:31:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.792 16:31:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.792 16:31:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:08.051 [2024-07-13 
16:31:39.482725] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:08.051 /dev/nbd0 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:08.309 16:31:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:08.309 16:31:39 -- common/autotest_common.sh@857 -- # local i 00:14:08.309 16:31:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:08.309 16:31:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:08.309 16:31:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:08.309 16:31:39 -- common/autotest_common.sh@861 -- # break 00:14:08.309 16:31:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:08.309 16:31:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:08.309 16:31:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.309 1+0 records in 00:14:08.309 1+0 records out 00:14:08.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00165125 s, 2.5 MB/s 00:14:08.309 16:31:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.309 16:31:39 -- common/autotest_common.sh@874 -- # size=4096 00:14:08.309 16:31:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.309 16:31:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:08.309 16:31:39 -- common/autotest_common.sh@877 -- # return 0 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.309 16:31:39 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:08.309 { 00:14:08.309 "nbd_device": "/dev/nbd0", 00:14:08.309 "bdev_name": "raid" 00:14:08.309 } 00:14:08.309 ]' 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:08.309 { 00:14:08.309 "nbd_device": "/dev/nbd0", 00:14:08.309 "bdev_name": "raid" 00:14:08.309 } 00:14:08.309 ]' 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:08.309 16:31:39 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:08.568 16:31:39 -- bdev/nbd_common.sh@65 -- # count=1 00:14:08.568 16:31:39 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@21 -- # cut -d ' 
' -f 5 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:08.568 4096+0 records in 00:14:08.568 4096+0 records out 00:14:08.568 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0202664 s, 103 MB/s 00:14:08.568 16:31:39 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:08.827 4096+0 records in 00:14:08.827 4096+0 records out 00:14:08.827 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.251859 s, 8.3 MB/s 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:08.827 128+0 records in 00:14:08.827 128+0 records out 00:14:08.827 65536 bytes (66 kB, 64 KiB) copied, 0.00131939 s, 49.7 MB/s 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:08.827 2035+0 records in 00:14:08.827 2035+0 records out 00:14:08.827 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00902275 s, 115 MB/s 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:08.827 456+0 records in 00:14:08.827 456+0 records out 00:14:08.827 233472 bytes (233 kB, 228 KiB) copied, 0.00339785 s, 68.7 MB/s 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:08.827 16:31:40 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:08.827 16:31:40 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:08.827 16:31:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:08.827 16:31:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:08.827 16:31:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.827 16:31:40 -- bdev/nbd_common.sh@51 -- # local i 00:14:08.827 16:31:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.827 16:31:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:09.086 [2024-07-13 16:31:40.450036] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@41 -- # break 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.086 16:31:40 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:09.086 16:31:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:09.345 16:31:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:09.345 16:31:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:09.345 16:31:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:09.345 16:31:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:09.345 16:31:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:09.345 16:31:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:09.345 16:31:40 -- bdev/nbd_common.sh@65 -- # true 00:14:09.345 16:31:40 -- bdev/nbd_common.sh@65 -- # count=0 00:14:09.345 16:31:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:09.345 16:31:40 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:09.345 16:31:40 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:09.345 16:31:40 -- bdev/bdev_raid.sh@111 -- # killprocess 122665 00:14:09.345 16:31:40 -- common/autotest_common.sh@926 -- # '[' -z 122665 ']' 00:14:09.345 16:31:40 -- common/autotest_common.sh@930 -- # kill -0 122665 00:14:09.345 16:31:40 -- common/autotest_common.sh@931 -- # uname 00:14:09.345 16:31:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:09.345 16:31:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122665 00:14:09.345 killing process with pid 122665 00:14:09.345 16:31:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:09.345 16:31:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:09.345 16:31:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122665' 00:14:09.345 16:31:40 -- common/autotest_common.sh@945 -- # kill 122665 00:14:09.345 [2024-07-13 16:31:40.758540] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.345 16:31:40 -- common/autotest_common.sh@950 -- # wait 122665 00:14:09.345 [2024-07-13 16:31:40.758711] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.345 [2024-07-13 16:31:40.758792] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.345 [2024-07-13 16:31:40.758802] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:14:09.345 [2024-07-13 16:31:40.799516] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.912 16:31:41 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:09.912 00:14:09.912 real 0m3.527s 00:14:09.912 user 0m4.558s 00:14:09.912 sys 0m1.120s 00:14:09.912 16:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.912 16:31:41 -- common/autotest_common.sh@10 -- # set +x 00:14:09.912 ************************************ 00:14:09.912 END TEST raid_function_test_concat 00:14:09.912 ************************************ 00:14:09.912 16:31:41 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:09.912 16:31:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:09.912 16:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:09.912 16:31:41 -- common/autotest_common.sh@10 -- # set +x 00:14:09.912 ************************************ 00:14:09.912 START TEST raid0_resize_test 00:14:09.912 ************************************ 00:14:09.912 16:31:41 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@301 -- # raid_pid=122806 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 122806' 00:14:09.913 Process raid pid: 122806 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:09.913 16:31:41 -- bdev/bdev_raid.sh@303 -- # waitforlisten 122806 /var/tmp/spdk-raid.sock 00:14:09.913 16:31:41 -- common/autotest_common.sh@819 -- # '[' -z 122806 ']' 00:14:09.913 16:31:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:09.913 16:31:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:09.913 16:31:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:09.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:09.913 16:31:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:09.913 16:31:41 -- common/autotest_common.sh@10 -- # set +x 00:14:09.913 [2024-07-13 16:31:41.323058] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:14:09.913 [2024-07-13 16:31:41.323464] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.172 [2024-07-13 16:31:41.467485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.172 [2024-07-13 16:31:41.548348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.172 [2024-07-13 16:31:41.625603] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.108 16:31:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:11.108 16:31:42 -- common/autotest_common.sh@852 -- # return 0 00:14:11.108 16:31:42 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:11.108 Base_1 00:14:11.108 16:31:42 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:11.367 Base_2 00:14:11.367 16:31:42 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:11.661 [2024-07-13 16:31:42.922290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:11.661 [2024-07-13 16:31:42.925178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:11.661 [2024-07-13 16:31:42.925373] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:11.661 [2024-07-13 16:31:42.925492] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:11.661 [2024-07-13 16:31:42.925759] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001de0 00:14:11.661 [2024-07-13 16:31:42.926266] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:11.661 [2024-07-13 16:31:42.926388] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006080 00:14:11.661 [2024-07-13 16:31:42.926765] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.661 16:31:42 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:11.661 [2024-07-13 16:31:43.110783] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:11.661 [2024-07-13 16:31:43.111007] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:11.661 true 00:14:11.937 16:31:43 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:11.937 16:31:43 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:11.937 [2024-07-13 16:31:43.354985] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.937 16:31:43 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:11.937 16:31:43 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:11.937 16:31:43 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:11.937 16:31:43 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:12.195 [2024-07-13 16:31:43.542821] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:14:12.195 [2024-07-13 16:31:43.543086] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:12.195 [2024-07-13 16:31:43.543219] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:12.195 [2024-07-13 16:31:43.543316] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:12.195 true 00:14:12.195 16:31:43 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:12.195 16:31:43 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:12.453 [2024-07-13 16:31:43.722974] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.453 16:31:43 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:12.453 16:31:43 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:12.453 16:31:43 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:12.453 16:31:43 -- bdev/bdev_raid.sh@332 -- # killprocess 122806 00:14:12.453 16:31:43 -- common/autotest_common.sh@926 -- # '[' -z 122806 ']' 00:14:12.453 16:31:43 -- common/autotest_common.sh@930 -- # kill -0 122806 00:14:12.453 16:31:43 -- common/autotest_common.sh@931 -- # uname 00:14:12.453 16:31:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:12.453 16:31:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122806 00:14:12.453 killing process with pid 122806 00:14:12.453 16:31:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:12.453 16:31:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:12.453 16:31:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122806' 00:14:12.453 16:31:43 -- common/autotest_common.sh@945 -- # kill 122806 00:14:12.453 [2024-07-13 16:31:43.770267] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.453 16:31:43 -- common/autotest_common.sh@950 -- # wait 122806 00:14:12.453 [2024-07-13 16:31:43.770396] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.453 [2024-07-13 16:31:43.770464] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.453 [2024-07-13 16:31:43.770474] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Raid, state offline 00:14:12.453 [2024-07-13 16:31:43.771046] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.712 ************************************ 00:14:12.712 END TEST raid0_resize_test 00:14:12.712 ************************************ 00:14:12.712 16:31:44 -- bdev/bdev_raid.sh@334 -- # return 0 00:14:12.712 00:14:12.712 real 0m2.898s 00:14:12.712 user 0m4.185s 00:14:12.712 sys 0m0.642s 00:14:12.712 16:31:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.712 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:12.971 16:31:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:12.971 16:31:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:12.971 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:14:12.971 ************************************ 00:14:12.971 START TEST 
raid_state_function_test 00:14:12.971 ************************************ 00:14:12.971 16:31:44 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=122888 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122888' 00:14:12.971 Process raid pid: 122888 00:14:12.971 16:31:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122888 /var/tmp/spdk-raid.sock 00:14:12.971 16:31:44 -- common/autotest_common.sh@819 -- # '[' -z 122888 ']' 00:14:12.971 16:31:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:12.971 16:31:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:12.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:12.971 16:31:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:12.971 16:31:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:12.971 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:14:12.971 [2024-07-13 16:31:44.298454] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
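Note on the state test starting here: it creates Existed_Raid over base bdevs that do not exist yet, so the raid begins in the "configuring" state with zero bases discovered, and verify_raid_bdev_state (shown below) checks this by filtering the output of bdev_raid_get_bdevs with jq. A minimal sketch of that check, assuming the raid-test RPC socket used throughout this run:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Pull the JSON record for the raid under test.
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(jq -r '.state' <<< "$info")                          # "configuring" until all bases exist
found=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
echo "state=$state discovered=$found/2"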
00:14:12.971 [2024-07-13 16:31:44.298837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.971 [2024-07-13 16:31:44.440976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.230 [2024-07-13 16:31:44.520741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.230 [2024-07-13 16:31:44.598200] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.167 16:31:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:14.167 16:31:45 -- common/autotest_common.sh@852 -- # return 0 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:14.167 [2024-07-13 16:31:45.501982] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.167 [2024-07-13 16:31:45.502295] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.167 [2024-07-13 16:31:45.502424] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.167 [2024-07-13 16:31:45.502481] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.167 16:31:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.425 16:31:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:14.425 "name": "Existed_Raid", 00:14:14.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.425 "strip_size_kb": 64, 00:14:14.425 "state": "configuring", 00:14:14.425 "raid_level": "raid0", 00:14:14.425 "superblock": false, 00:14:14.425 "num_base_bdevs": 2, 00:14:14.425 "num_base_bdevs_discovered": 0, 00:14:14.425 "num_base_bdevs_operational": 2, 00:14:14.425 "base_bdevs_list": [ 00:14:14.425 { 00:14:14.425 "name": "BaseBdev1", 00:14:14.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.425 "is_configured": false, 00:14:14.425 "data_offset": 0, 00:14:14.425 "data_size": 0 00:14:14.425 }, 00:14:14.425 { 00:14:14.425 "name": "BaseBdev2", 00:14:14.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.425 "is_configured": false, 00:14:14.425 "data_offset": 0, 00:14:14.425 "data_size": 0 00:14:14.425 } 00:14:14.425 ] 00:14:14.425 }' 00:14:14.425 16:31:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:14.425 16:31:45 -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.992 16:31:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:15.251 [2024-07-13 16:31:46.554042] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.251 [2024-07-13 16:31:46.554306] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:15.251 16:31:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:15.509 [2024-07-13 16:31:46.830124] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.509 [2024-07-13 16:31:46.830458] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.509 [2024-07-13 16:31:46.830550] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.509 [2024-07-13 16:31:46.830624] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.510 16:31:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.769 [2024-07-13 16:31:47.082067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.769 BaseBdev1 00:14:15.769 16:31:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:15.769 16:31:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:15.769 16:31:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:15.769 16:31:47 -- common/autotest_common.sh@889 -- # local i 00:14:15.769 16:31:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:15.769 16:31:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:15.769 16:31:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.028 16:31:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.028 [ 00:14:16.028 { 00:14:16.028 "name": "BaseBdev1", 00:14:16.028 "aliases": [ 00:14:16.028 "4bf1bee7-2745-411b-875a-9a88f0055486" 00:14:16.028 ], 00:14:16.028 "product_name": "Malloc disk", 00:14:16.028 "block_size": 512, 00:14:16.028 "num_blocks": 65536, 00:14:16.028 "uuid": "4bf1bee7-2745-411b-875a-9a88f0055486", 00:14:16.028 "assigned_rate_limits": { 00:14:16.028 "rw_ios_per_sec": 0, 00:14:16.028 "rw_mbytes_per_sec": 0, 00:14:16.028 "r_mbytes_per_sec": 0, 00:14:16.028 "w_mbytes_per_sec": 0 00:14:16.028 }, 00:14:16.028 "claimed": true, 00:14:16.028 "claim_type": "exclusive_write", 00:14:16.028 "zoned": false, 00:14:16.028 "supported_io_types": { 00:14:16.028 "read": true, 00:14:16.028 "write": true, 00:14:16.028 "unmap": true, 00:14:16.028 "write_zeroes": true, 00:14:16.028 "flush": true, 00:14:16.028 "reset": true, 00:14:16.028 "compare": false, 00:14:16.028 "compare_and_write": false, 00:14:16.028 "abort": true, 00:14:16.028 "nvme_admin": false, 00:14:16.028 "nvme_io": false 00:14:16.028 }, 00:14:16.028 "memory_domains": [ 00:14:16.028 { 00:14:16.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.028 "dma_device_type": 2 00:14:16.028 } 00:14:16.028 ], 00:14:16.028 "driver_specific": {} 00:14:16.028 } 00:14:16.028 ] 00:14:16.028 16:31:47 
-- common/autotest_common.sh@895 -- # return 0 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.028 16:31:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.287 16:31:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:16.287 "name": "Existed_Raid", 00:14:16.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.287 "strip_size_kb": 64, 00:14:16.287 "state": "configuring", 00:14:16.287 "raid_level": "raid0", 00:14:16.287 "superblock": false, 00:14:16.287 "num_base_bdevs": 2, 00:14:16.287 "num_base_bdevs_discovered": 1, 00:14:16.287 "num_base_bdevs_operational": 2, 00:14:16.287 "base_bdevs_list": [ 00:14:16.287 { 00:14:16.287 "name": "BaseBdev1", 00:14:16.287 "uuid": "4bf1bee7-2745-411b-875a-9a88f0055486", 00:14:16.287 "is_configured": true, 00:14:16.287 "data_offset": 0, 00:14:16.287 "data_size": 65536 00:14:16.287 }, 00:14:16.287 { 00:14:16.287 "name": "BaseBdev2", 00:14:16.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.287 "is_configured": false, 00:14:16.287 "data_offset": 0, 00:14:16.287 "data_size": 0 00:14:16.287 } 00:14:16.287 ] 00:14:16.287 }' 00:14:16.287 16:31:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:16.287 16:31:47 -- common/autotest_common.sh@10 -- # set +x 00:14:16.855 16:31:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:16.855 [2024-07-13 16:31:48.322329] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.855 [2024-07-13 16:31:48.322635] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:17.114 16:31:48 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:17.114 16:31:48 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:17.114 [2024-07-13 16:31:48.570495] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.114 [2024-07-13 16:31:48.573148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.114 [2024-07-13 16:31:48.573336] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:17.372 16:31:48 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.372 16:31:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:17.372 "name": "Existed_Raid", 00:14:17.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.372 "strip_size_kb": 64, 00:14:17.372 "state": "configuring", 00:14:17.372 "raid_level": "raid0", 00:14:17.372 "superblock": false, 00:14:17.372 "num_base_bdevs": 2, 00:14:17.372 "num_base_bdevs_discovered": 1, 00:14:17.372 "num_base_bdevs_operational": 2, 00:14:17.372 "base_bdevs_list": [ 00:14:17.372 { 00:14:17.372 "name": "BaseBdev1", 00:14:17.372 "uuid": "4bf1bee7-2745-411b-875a-9a88f0055486", 00:14:17.372 "is_configured": true, 00:14:17.372 "data_offset": 0, 00:14:17.373 "data_size": 65536 00:14:17.373 }, 00:14:17.373 { 00:14:17.373 "name": "BaseBdev2", 00:14:17.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.373 "is_configured": false, 00:14:17.373 "data_offset": 0, 00:14:17.373 "data_size": 0 00:14:17.373 } 00:14:17.373 ] 00:14:17.373 }' 00:14:17.373 16:31:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:17.373 16:31:48 -- common/autotest_common.sh@10 -- # set +x 00:14:17.941 16:31:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.200 [2024-07-13 16:31:49.493932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.200 [2024-07-13 16:31:49.494269] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:18.200 [2024-07-13 16:31:49.494329] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:18.200 [2024-07-13 16:31:49.494650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:18.200 [2024-07-13 16:31:49.495355] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:18.200 [2024-07-13 16:31:49.495506] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:18.200 [2024-07-13 16:31:49.495959] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.200 BaseBdev2 00:14:18.200 16:31:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:18.200 16:31:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:18.200 16:31:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:18.200 16:31:49 -- common/autotest_common.sh@889 -- # local i 00:14:18.200 16:31:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:18.200 16:31:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:18.200 
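(The transition just traced is the heart of the state test: a raid0 created before its base bdevs exist sits in "configuring", and flips to "online" only when the last base bdev shows up and is claimed. The same flow, condensed; RPC is shorthand for the rpc.py invocation used throughout this log, and the trailing jq path is added here as one way to read the state back.)

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Creating the raid first is fine: both base bdevs "don't exist now".
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

$RPC bdev_malloc_create 32 512 -b BaseBdev1   # claimed; still "configuring", 1 of 2 discovered
$RPC bdev_malloc_create 32 512 -b BaseBdev2   # claimed; raid goes "online", 2 of 2

$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'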
16:31:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.458 16:31:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.717 [ 00:14:18.717 { 00:14:18.717 "name": "BaseBdev2", 00:14:18.717 "aliases": [ 00:14:18.717 "5e8fc21a-577c-4c0c-b9f4-f2c31352fd8c" 00:14:18.717 ], 00:14:18.717 "product_name": "Malloc disk", 00:14:18.717 "block_size": 512, 00:14:18.717 "num_blocks": 65536, 00:14:18.717 "uuid": "5e8fc21a-577c-4c0c-b9f4-f2c31352fd8c", 00:14:18.717 "assigned_rate_limits": { 00:14:18.717 "rw_ios_per_sec": 0, 00:14:18.717 "rw_mbytes_per_sec": 0, 00:14:18.717 "r_mbytes_per_sec": 0, 00:14:18.717 "w_mbytes_per_sec": 0 00:14:18.717 }, 00:14:18.717 "claimed": true, 00:14:18.717 "claim_type": "exclusive_write", 00:14:18.717 "zoned": false, 00:14:18.718 "supported_io_types": { 00:14:18.718 "read": true, 00:14:18.718 "write": true, 00:14:18.718 "unmap": true, 00:14:18.718 "write_zeroes": true, 00:14:18.718 "flush": true, 00:14:18.718 "reset": true, 00:14:18.718 "compare": false, 00:14:18.718 "compare_and_write": false, 00:14:18.718 "abort": true, 00:14:18.718 "nvme_admin": false, 00:14:18.718 "nvme_io": false 00:14:18.718 }, 00:14:18.718 "memory_domains": [ 00:14:18.718 { 00:14:18.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.718 "dma_device_type": 2 00:14:18.718 } 00:14:18.718 ], 00:14:18.718 "driver_specific": {} 00:14:18.718 } 00:14:18.718 ] 00:14:18.718 16:31:49 -- common/autotest_common.sh@895 -- # return 0 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.718 16:31:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.976 16:31:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.976 "name": "Existed_Raid", 00:14:18.976 "uuid": "687516df-96bd-4e67-81ae-8fc9a0dbd118", 00:14:18.976 "strip_size_kb": 64, 00:14:18.976 "state": "online", 00:14:18.976 "raid_level": "raid0", 00:14:18.976 "superblock": false, 00:14:18.976 "num_base_bdevs": 2, 00:14:18.976 "num_base_bdevs_discovered": 2, 00:14:18.976 "num_base_bdevs_operational": 2, 00:14:18.976 "base_bdevs_list": [ 00:14:18.976 { 00:14:18.976 "name": "BaseBdev1", 00:14:18.976 "uuid": "4bf1bee7-2745-411b-875a-9a88f0055486", 00:14:18.976 "is_configured": true, 00:14:18.976 "data_offset": 0, 00:14:18.976 "data_size": 65536 00:14:18.976 }, 00:14:18.976 { 00:14:18.976 "name": "BaseBdev2", 
00:14:18.976 "uuid": "5e8fc21a-577c-4c0c-b9f4-f2c31352fd8c", 00:14:18.976 "is_configured": true, 00:14:18.976 "data_offset": 0, 00:14:18.976 "data_size": 65536 00:14:18.976 } 00:14:18.976 ] 00:14:18.976 }' 00:14:18.976 16:31:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.976 16:31:50 -- common/autotest_common.sh@10 -- # set +x 00:14:19.543 16:31:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:19.543 [2024-07-13 16:31:50.974354] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.543 [2024-07-13 16:31:50.974545] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.543 [2024-07-13 16:31:50.974770] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.802 "name": "Existed_Raid", 00:14:19.802 "uuid": "687516df-96bd-4e67-81ae-8fc9a0dbd118", 00:14:19.802 "strip_size_kb": 64, 00:14:19.802 "state": "offline", 00:14:19.802 "raid_level": "raid0", 00:14:19.802 "superblock": false, 00:14:19.802 "num_base_bdevs": 2, 00:14:19.802 "num_base_bdevs_discovered": 1, 00:14:19.802 "num_base_bdevs_operational": 1, 00:14:19.802 "base_bdevs_list": [ 00:14:19.802 { 00:14:19.802 "name": null, 00:14:19.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.802 "is_configured": false, 00:14:19.802 "data_offset": 0, 00:14:19.802 "data_size": 65536 00:14:19.802 }, 00:14:19.802 { 00:14:19.802 "name": "BaseBdev2", 00:14:19.802 "uuid": "5e8fc21a-577c-4c0c-b9f4-f2c31352fd8c", 00:14:19.802 "is_configured": true, 00:14:19.802 "data_offset": 0, 00:14:19.802 "data_size": 65536 00:14:19.802 } 00:14:19.802 ] 00:14:19.802 }' 00:14:19.802 16:31:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.802 16:31:51 -- common/autotest_common.sh@10 -- # set +x 00:14:20.369 16:31:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:20.369 16:31:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:20.369 16:31:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.369 16:31:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:20.628 16:31:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:20.628 16:31:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.628 16:31:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:20.886 [2024-07-13 16:31:52.239447] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.886 [2024-07-13 16:31:52.239753] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:14:20.886 16:31:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:20.886 16:31:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:20.886 16:31:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.886 16:31:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:21.145 16:31:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:21.145 16:31:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:21.145 16:31:52 -- bdev/bdev_raid.sh@287 -- # killprocess 122888 00:14:21.145 16:31:52 -- common/autotest_common.sh@926 -- # '[' -z 122888 ']' 00:14:21.145 16:31:52 -- common/autotest_common.sh@930 -- # kill -0 122888 00:14:21.145 16:31:52 -- common/autotest_common.sh@931 -- # uname 00:14:21.145 16:31:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:21.145 16:31:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122888 00:14:21.145 16:31:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:21.145 16:31:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:21.145 16:31:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122888' 00:14:21.145 killing process with pid 122888 00:14:21.145 16:31:52 -- common/autotest_common.sh@945 -- # kill 122888 00:14:21.145 [2024-07-13 16:31:52.489649] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.145 16:31:52 -- common/autotest_common.sh@950 -- # wait 122888 00:14:21.145 [2024-07-13 16:31:52.489880] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.711 16:31:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:21.711 00:14:21.711 real 0m8.649s 00:14:21.711 user 0m14.898s 00:14:21.711 sys 0m1.691s 00:14:21.711 16:31:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.711 16:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:21.711 ************************************ 00:14:21.711 END TEST raid_state_function_test 00:14:21.711 ************************************ 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:21.712 16:31:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:21.712 16:31:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:21.712 16:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:21.712 ************************************ 00:14:21.712 START TEST raid_state_function_test_sb 00:14:21.712 ************************************ 00:14:21.712 16:31:52 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:21.712 16:31:52 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=123194 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123194' 00:14:21.712 Process raid pid: 123194 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:21.712 16:31:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123194 /var/tmp/spdk-raid.sock 00:14:21.712 16:31:52 -- common/autotest_common.sh@819 -- # '[' -z 123194 ']' 00:14:21.712 16:31:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:21.712 16:31:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.712 16:31:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:21.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:21.712 16:31:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.712 16:31:52 -- common/autotest_common.sh@10 -- # set +x 00:14:21.712 [2024-07-13 16:31:53.018492] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
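(raid_state_function_test_sb repeats the same sequence with one difference, first traced just below: the -s flag on bdev_raid_create, which writes a superblock to every base bdev. Its visible effect in the Existed_Raid JSON further down is data_offset 2048 and data_size 63488 instead of 0 and 65536 — with 512-byte blocks, 2048 blocks is 1 MiB reserved per base bdev. Condensed, with the same $RPC shorthand:)

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Identical to the non-superblock test except for -s.
$RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid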
00:14:21.712 [2024-07-13 16:31:53.018831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.712 [2024-07-13 16:31:53.160183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.970 [2024-07-13 16:31:53.235878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.970 [2024-07-13 16:31:53.313550] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.661 16:31:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.661 16:31:53 -- common/autotest_common.sh@852 -- # return 0 00:14:22.661 16:31:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:22.661 [2024-07-13 16:31:54.041453] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.661 [2024-07-13 16:31:54.041730] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.661 [2024-07-13 16:31:54.041833] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.661 [2024-07-13 16:31:54.041890] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.661 16:31:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.918 16:31:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.918 "name": "Existed_Raid", 00:14:22.918 "uuid": "415a9871-c5ef-43b6-a03d-70d464567ee4", 00:14:22.918 "strip_size_kb": 64, 00:14:22.918 "state": "configuring", 00:14:22.918 "raid_level": "raid0", 00:14:22.918 "superblock": true, 00:14:22.918 "num_base_bdevs": 2, 00:14:22.918 "num_base_bdevs_discovered": 0, 00:14:22.918 "num_base_bdevs_operational": 2, 00:14:22.918 "base_bdevs_list": [ 00:14:22.918 { 00:14:22.918 "name": "BaseBdev1", 00:14:22.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.918 "is_configured": false, 00:14:22.918 "data_offset": 0, 00:14:22.918 "data_size": 0 00:14:22.918 }, 00:14:22.918 { 00:14:22.918 "name": "BaseBdev2", 00:14:22.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.918 "is_configured": false, 00:14:22.918 "data_offset": 0, 00:14:22.918 "data_size": 0 00:14:22.918 } 00:14:22.918 ] 00:14:22.918 }' 00:14:22.918 16:31:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.918 16:31:54 -- 
common/autotest_common.sh@10 -- # set +x 00:14:23.482 16:31:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:23.740 [2024-07-13 16:31:55.021462] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.740 [2024-07-13 16:31:55.021732] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:23.740 16:31:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:23.998 [2024-07-13 16:31:55.213625] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.998 [2024-07-13 16:31:55.213946] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.998 [2024-07-13 16:31:55.214042] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.998 [2024-07-13 16:31:55.214160] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.998 16:31:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.998 [2024-07-13 16:31:55.421353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.998 BaseBdev1 00:14:23.998 16:31:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:23.998 16:31:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:23.998 16:31:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:23.998 16:31:55 -- common/autotest_common.sh@889 -- # local i 00:14:23.998 16:31:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:23.998 16:31:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:23.998 16:31:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:24.256 16:31:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:24.512 [ 00:14:24.512 { 00:14:24.512 "name": "BaseBdev1", 00:14:24.512 "aliases": [ 00:14:24.512 "bdd07a0f-00be-468a-a45c-249c39857b94" 00:14:24.512 ], 00:14:24.512 "product_name": "Malloc disk", 00:14:24.512 "block_size": 512, 00:14:24.513 "num_blocks": 65536, 00:14:24.513 "uuid": "bdd07a0f-00be-468a-a45c-249c39857b94", 00:14:24.513 "assigned_rate_limits": { 00:14:24.513 "rw_ios_per_sec": 0, 00:14:24.513 "rw_mbytes_per_sec": 0, 00:14:24.513 "r_mbytes_per_sec": 0, 00:14:24.513 "w_mbytes_per_sec": 0 00:14:24.513 }, 00:14:24.513 "claimed": true, 00:14:24.513 "claim_type": "exclusive_write", 00:14:24.513 "zoned": false, 00:14:24.513 "supported_io_types": { 00:14:24.513 "read": true, 00:14:24.513 "write": true, 00:14:24.513 "unmap": true, 00:14:24.513 "write_zeroes": true, 00:14:24.513 "flush": true, 00:14:24.513 "reset": true, 00:14:24.513 "compare": false, 00:14:24.513 "compare_and_write": false, 00:14:24.513 "abort": true, 00:14:24.513 "nvme_admin": false, 00:14:24.513 "nvme_io": false 00:14:24.513 }, 00:14:24.513 "memory_domains": [ 00:14:24.513 { 00:14:24.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.513 "dma_device_type": 2 00:14:24.513 } 00:14:24.513 ], 00:14:24.513 "driver_specific": {} 00:14:24.513 } 00:14:24.513 ] 00:14:24.513 
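(The waitforbdev helper traced above — the bdev_get_bdevs output for BaseBdev1 just printed is its final check — reduces to two RPCs: flush pending examine callbacks, then ask for the bdev with a timeout. A minimal equivalent, same $RPC shorthand:)

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_wait_for_examine                 # block until examine of newly added bdevs finishes
$RPC bdev_get_bdevs -b BaseBdev1 -t 2000   # -t: wait up to the 2000 timeout used by this run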
16:31:55 -- common/autotest_common.sh@895 -- # return 0 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.513 16:31:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.769 16:31:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.769 "name": "Existed_Raid", 00:14:24.769 "uuid": "d851fa4c-e374-4a99-827f-10e29a993348", 00:14:24.769 "strip_size_kb": 64, 00:14:24.769 "state": "configuring", 00:14:24.769 "raid_level": "raid0", 00:14:24.769 "superblock": true, 00:14:24.769 "num_base_bdevs": 2, 00:14:24.769 "num_base_bdevs_discovered": 1, 00:14:24.769 "num_base_bdevs_operational": 2, 00:14:24.769 "base_bdevs_list": [ 00:14:24.769 { 00:14:24.769 "name": "BaseBdev1", 00:14:24.769 "uuid": "bdd07a0f-00be-468a-a45c-249c39857b94", 00:14:24.769 "is_configured": true, 00:14:24.769 "data_offset": 2048, 00:14:24.769 "data_size": 63488 00:14:24.769 }, 00:14:24.769 { 00:14:24.769 "name": "BaseBdev2", 00:14:24.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.769 "is_configured": false, 00:14:24.769 "data_offset": 0, 00:14:24.769 "data_size": 0 00:14:24.769 } 00:14:24.769 ] 00:14:24.769 }' 00:14:24.769 16:31:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.769 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:14:25.334 16:31:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:25.334 [2024-07-13 16:31:56.789684] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.334 [2024-07-13 16:31:56.789968] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:25.592 16:31:56 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:25.592 16:31:56 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:25.849 16:31:57 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.107 BaseBdev1 00:14:26.107 16:31:57 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:26.107 16:31:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:26.107 16:31:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:26.107 16:31:57 -- common/autotest_common.sh@889 -- # local i 00:14:26.107 16:31:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:26.107 16:31:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:26.107 16:31:57 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:26.108 16:31:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.365 [ 00:14:26.365 { 00:14:26.365 "name": "BaseBdev1", 00:14:26.365 "aliases": [ 00:14:26.365 "38efc7fa-9cc2-4e6e-a387-ef830bbb1d31" 00:14:26.365 ], 00:14:26.365 "product_name": "Malloc disk", 00:14:26.365 "block_size": 512, 00:14:26.365 "num_blocks": 65536, 00:14:26.365 "uuid": "38efc7fa-9cc2-4e6e-a387-ef830bbb1d31", 00:14:26.365 "assigned_rate_limits": { 00:14:26.365 "rw_ios_per_sec": 0, 00:14:26.365 "rw_mbytes_per_sec": 0, 00:14:26.365 "r_mbytes_per_sec": 0, 00:14:26.365 "w_mbytes_per_sec": 0 00:14:26.365 }, 00:14:26.365 "claimed": false, 00:14:26.365 "zoned": false, 00:14:26.365 "supported_io_types": { 00:14:26.365 "read": true, 00:14:26.365 "write": true, 00:14:26.365 "unmap": true, 00:14:26.365 "write_zeroes": true, 00:14:26.365 "flush": true, 00:14:26.365 "reset": true, 00:14:26.365 "compare": false, 00:14:26.365 "compare_and_write": false, 00:14:26.365 "abort": true, 00:14:26.365 "nvme_admin": false, 00:14:26.365 "nvme_io": false 00:14:26.365 }, 00:14:26.365 "memory_domains": [ 00:14:26.365 { 00:14:26.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.365 "dma_device_type": 2 00:14:26.365 } 00:14:26.365 ], 00:14:26.365 "driver_specific": {} 00:14:26.365 } 00:14:26.365 ] 00:14:26.365 16:31:57 -- common/autotest_common.sh@895 -- # return 0 00:14:26.365 16:31:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:26.622 [2024-07-13 16:31:57.950088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.622 [2024-07-13 16:31:57.952769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.622 [2024-07-13 16:31:57.952967] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.622 16:31:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.881 16:31:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:26.881 "name": "Existed_Raid", 00:14:26.881 "uuid": "11eeb1c2-af4e-4afb-9192-b7fbebf5568a", 00:14:26.881 "strip_size_kb": 64, 00:14:26.881 "state": 
"configuring", 00:14:26.881 "raid_level": "raid0", 00:14:26.881 "superblock": true, 00:14:26.881 "num_base_bdevs": 2, 00:14:26.881 "num_base_bdevs_discovered": 1, 00:14:26.881 "num_base_bdevs_operational": 2, 00:14:26.881 "base_bdevs_list": [ 00:14:26.881 { 00:14:26.881 "name": "BaseBdev1", 00:14:26.881 "uuid": "38efc7fa-9cc2-4e6e-a387-ef830bbb1d31", 00:14:26.881 "is_configured": true, 00:14:26.881 "data_offset": 2048, 00:14:26.881 "data_size": 63488 00:14:26.881 }, 00:14:26.881 { 00:14:26.881 "name": "BaseBdev2", 00:14:26.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.881 "is_configured": false, 00:14:26.881 "data_offset": 0, 00:14:26.881 "data_size": 0 00:14:26.881 } 00:14:26.881 ] 00:14:26.881 }' 00:14:26.881 16:31:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:26.881 16:31:58 -- common/autotest_common.sh@10 -- # set +x 00:14:27.450 16:31:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:27.450 [2024-07-13 16:31:58.850001] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.450 [2024-07-13 16:31:58.850580] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:14:27.450 [2024-07-13 16:31:58.850739] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:27.450 [2024-07-13 16:31:58.850996] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:27.450 [2024-07-13 16:31:58.851611] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:14:27.450 BaseBdev2 00:14:27.450 [2024-07-13 16:31:58.851782] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:14:27.450 [2024-07-13 16:31:58.852103] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.450 16:31:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:27.450 16:31:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:27.450 16:31:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:27.450 16:31:58 -- common/autotest_common.sh@889 -- # local i 00:14:27.450 16:31:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:27.450 16:31:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:27.450 16:31:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:27.708 16:31:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:27.966 [ 00:14:27.966 { 00:14:27.966 "name": "BaseBdev2", 00:14:27.966 "aliases": [ 00:14:27.966 "00201496-7d56-4cf6-8ed8-c87572b2483b" 00:14:27.966 ], 00:14:27.966 "product_name": "Malloc disk", 00:14:27.966 "block_size": 512, 00:14:27.966 "num_blocks": 65536, 00:14:27.966 "uuid": "00201496-7d56-4cf6-8ed8-c87572b2483b", 00:14:27.966 "assigned_rate_limits": { 00:14:27.966 "rw_ios_per_sec": 0, 00:14:27.966 "rw_mbytes_per_sec": 0, 00:14:27.966 "r_mbytes_per_sec": 0, 00:14:27.966 "w_mbytes_per_sec": 0 00:14:27.966 }, 00:14:27.966 "claimed": true, 00:14:27.966 "claim_type": "exclusive_write", 00:14:27.966 "zoned": false, 00:14:27.966 "supported_io_types": { 00:14:27.966 "read": true, 00:14:27.966 "write": true, 00:14:27.966 "unmap": true, 00:14:27.966 "write_zeroes": true, 00:14:27.966 "flush": true, 00:14:27.966 
"reset": true, 00:14:27.966 "compare": false, 00:14:27.966 "compare_and_write": false, 00:14:27.966 "abort": true, 00:14:27.966 "nvme_admin": false, 00:14:27.966 "nvme_io": false 00:14:27.966 }, 00:14:27.966 "memory_domains": [ 00:14:27.966 { 00:14:27.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.966 "dma_device_type": 2 00:14:27.966 } 00:14:27.966 ], 00:14:27.966 "driver_specific": {} 00:14:27.966 } 00:14:27.966 ] 00:14:27.966 16:31:59 -- common/autotest_common.sh@895 -- # return 0 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.966 16:31:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.967 16:31:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.224 16:31:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:28.224 "name": "Existed_Raid", 00:14:28.224 "uuid": "11eeb1c2-af4e-4afb-9192-b7fbebf5568a", 00:14:28.224 "strip_size_kb": 64, 00:14:28.224 "state": "online", 00:14:28.224 "raid_level": "raid0", 00:14:28.224 "superblock": true, 00:14:28.224 "num_base_bdevs": 2, 00:14:28.224 "num_base_bdevs_discovered": 2, 00:14:28.224 "num_base_bdevs_operational": 2, 00:14:28.224 "base_bdevs_list": [ 00:14:28.224 { 00:14:28.224 "name": "BaseBdev1", 00:14:28.224 "uuid": "38efc7fa-9cc2-4e6e-a387-ef830bbb1d31", 00:14:28.224 "is_configured": true, 00:14:28.224 "data_offset": 2048, 00:14:28.224 "data_size": 63488 00:14:28.224 }, 00:14:28.224 { 00:14:28.224 "name": "BaseBdev2", 00:14:28.224 "uuid": "00201496-7d56-4cf6-8ed8-c87572b2483b", 00:14:28.224 "is_configured": true, 00:14:28.224 "data_offset": 2048, 00:14:28.224 "data_size": 63488 00:14:28.224 } 00:14:28.224 ] 00:14:28.224 }' 00:14:28.224 16:31:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:28.224 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:14:28.790 16:32:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:29.049 [2024-07-13 16:32:00.414434] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.049 [2024-07-13 16:32:00.414669] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.049 [2024-07-13 16:32:00.414924] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:29.049 
16:32:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.049 16:32:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.307 16:32:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.307 "name": "Existed_Raid", 00:14:29.307 "uuid": "11eeb1c2-af4e-4afb-9192-b7fbebf5568a", 00:14:29.307 "strip_size_kb": 64, 00:14:29.307 "state": "offline", 00:14:29.307 "raid_level": "raid0", 00:14:29.307 "superblock": true, 00:14:29.307 "num_base_bdevs": 2, 00:14:29.307 "num_base_bdevs_discovered": 1, 00:14:29.307 "num_base_bdevs_operational": 1, 00:14:29.307 "base_bdevs_list": [ 00:14:29.307 { 00:14:29.307 "name": null, 00:14:29.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.307 "is_configured": false, 00:14:29.307 "data_offset": 2048, 00:14:29.307 "data_size": 63488 00:14:29.307 }, 00:14:29.307 { 00:14:29.307 "name": "BaseBdev2", 00:14:29.307 "uuid": "00201496-7d56-4cf6-8ed8-c87572b2483b", 00:14:29.307 "is_configured": true, 00:14:29.307 "data_offset": 2048, 00:14:29.307 "data_size": 63488 00:14:29.307 } 00:14:29.307 ] 00:14:29.307 }' 00:14:29.307 16:32:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.307 16:32:00 -- common/autotest_common.sh@10 -- # set +x 00:14:29.874 16:32:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:29.874 16:32:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:29.874 16:32:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:29.874 16:32:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.874 16:32:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:29.874 16:32:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:29.874 16:32:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:30.133 [2024-07-13 16:32:01.530530] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.133 [2024-07-13 16:32:01.530851] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:14:30.133 16:32:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:30.133 16:32:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:30.133 16:32:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.133 16:32:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:30.391 16:32:01 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:30.391 16:32:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:30.391 16:32:01 -- bdev/bdev_raid.sh@287 -- # killprocess 123194 00:14:30.391 16:32:01 -- common/autotest_common.sh@926 -- # '[' -z 123194 ']' 00:14:30.391 16:32:01 -- common/autotest_common.sh@930 -- # kill -0 123194 00:14:30.391 16:32:01 -- common/autotest_common.sh@931 -- # uname 00:14:30.391 16:32:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:30.391 16:32:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123194 00:14:30.391 16:32:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:30.391 16:32:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:30.391 16:32:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123194' 00:14:30.391 killing process with pid 123194 00:14:30.391 16:32:01 -- common/autotest_common.sh@945 -- # kill 123194 00:14:30.391 16:32:01 -- common/autotest_common.sh@950 -- # wait 123194 00:14:30.391 [2024-07-13 16:32:01.849372] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.391 [2024-07-13 16:32:01.849470] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:30.959 00:14:30.959 real 0m9.285s 00:14:30.959 user 0m16.024s 00:14:30.959 sys 0m1.737s 00:14:30.959 16:32:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.959 ************************************ 00:14:30.959 END TEST raid_state_function_test_sb 00:14:30.959 ************************************ 00:14:30.959 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:30.959 16:32:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:30.959 16:32:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:30.959 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:30.959 ************************************ 00:14:30.959 START TEST raid_superblock_test 00:14:30.959 ************************************ 00:14:30.959 16:32:02 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@357 -- # raid_pid=123509 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123509 
/var/tmp/spdk-raid.sock 00:14:30.959 16:32:02 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:30.959 16:32:02 -- common/autotest_common.sh@819 -- # '[' -z 123509 ']' 00:14:30.959 16:32:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:30.959 16:32:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:30.959 16:32:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:30.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:30.959 16:32:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:30.959 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:30.959 [2024-07-13 16:32:02.366857] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:30.959 [2024-07-13 16:32:02.367270] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123509 ] 00:14:31.217 [2024-07-13 16:32:02.512862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.217 [2024-07-13 16:32:02.590469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.217 [2024-07-13 16:32:02.668455] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.783 16:32:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:31.783 16:32:03 -- common/autotest_common.sh@852 -- # return 0 00:14:31.783 16:32:03 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:31.783 16:32:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:31.783 16:32:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:31.783 16:32:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:31.783 16:32:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:31.783 16:32:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:31.783 16:32:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:31.783 16:32:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:31.783 16:32:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:32.041 malloc1 00:14:32.041 16:32:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:32.315 [2024-07-13 16:32:03.644679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:32.315 [2024-07-13 16:32:03.645056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.315 [2024-07-13 16:32:03.645146] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:14:32.315 [2024-07-13 16:32:03.645286] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.315 [2024-07-13 16:32:03.648537] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.315 [2024-07-13 16:32:03.648713] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:32.315 pt1 00:14:32.315 16:32:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
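(Unlike the state tests, raid_superblock_test builds each base bdev as a passthru — pt1, pt2 — stacked on a malloc disk, with a fixed UUID supplied via -u. The pt1 half just traced, condensed with the same $RPC shorthand:)

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001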
00:14:32.315 16:32:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:32.315 16:32:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:32.315 16:32:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:32.315 16:32:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:32.315 16:32:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:32.315 16:32:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:32.315 16:32:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:32.315 16:32:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:32.574 malloc2 00:14:32.574 16:32:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:32.574 [2024-07-13 16:32:04.028642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:32.574 [2024-07-13 16:32:04.028963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.574 [2024-07-13 16:32:04.029043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:32.574 [2024-07-13 16:32:04.029173] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.574 [2024-07-13 16:32:04.032108] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.574 [2024-07-13 16:32:04.032295] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:32.574 pt2 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:32.832 [2024-07-13 16:32:04.216813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:32.832 [2024-07-13 16:32:04.219563] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.832 [2024-07-13 16:32:04.219918] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:14:32.832 [2024-07-13 16:32:04.220022] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:32.832 [2024-07-13 16:32:04.220223] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:32.832 [2024-07-13 16:32:04.220845] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:14:32.832 [2024-07-13 16:32:04.220952] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:14:32.832 [2024-07-13 16:32:04.221238] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
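With both members wrapped, bdev_raid_create assembles them into the raid0 volume under test: -z sets the strip size in KiB, -r the RAID level, -b the base bdev list, -n the bdev name, and -s requests an on-disk superblock. The "blockcnt 126976, blocklen 512" DEBUG line above is consistent with the superblock reserving 2048 blocks (1 MiB at 512 B per block) on each member: 2 x (65536 - 2048) = 126976. A sketch of the call and a size check, under the same socket assumption as the previous snippet:

# Assemble the passthru bdevs into a raid0 volume with a superblock.
# (RPC as defined in the sketch above.)
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s

# Expected capacity: 2 x (65536 - 2048) = 126976 blocks of 512 B.
$RPC bdev_get_bdevs -b raid_bdev1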
00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.832 16:32:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.090 16:32:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:33.090 "name": "raid_bdev1", 00:14:33.090 "uuid": "8bf89c14-4a0c-49b2-afa4-beb3fc7cc5e7", 00:14:33.090 "strip_size_kb": 64, 00:14:33.090 "state": "online", 00:14:33.090 "raid_level": "raid0", 00:14:33.090 "superblock": true, 00:14:33.090 "num_base_bdevs": 2, 00:14:33.090 "num_base_bdevs_discovered": 2, 00:14:33.090 "num_base_bdevs_operational": 2, 00:14:33.090 "base_bdevs_list": [ 00:14:33.090 { 00:14:33.090 "name": "pt1", 00:14:33.090 "uuid": "31efd439-52a2-50fa-a747-c64f2f4a2cc0", 00:14:33.090 "is_configured": true, 00:14:33.090 "data_offset": 2048, 00:14:33.090 "data_size": 63488 00:14:33.090 }, 00:14:33.090 { 00:14:33.090 "name": "pt2", 00:14:33.090 "uuid": "4361f9d4-3cde-5c1a-a455-143411423332", 00:14:33.090 "is_configured": true, 00:14:33.090 "data_offset": 2048, 00:14:33.090 "data_size": 63488 00:14:33.090 } 00:14:33.090 ] 00:14:33.090 }' 00:14:33.090 16:32:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:33.090 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:14:33.657 16:32:05 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:33.657 16:32:05 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:33.916 [2024-07-13 16:32:05.281618] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.916 16:32:05 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8bf89c14-4a0c-49b2-afa4-beb3fc7cc5e7 00:14:33.916 16:32:05 -- bdev/bdev_raid.sh@380 -- # '[' -z 8bf89c14-4a0c-49b2-afa4-beb3fc7cc5e7 ']' 00:14:33.916 16:32:05 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:34.175 [2024-07-13 16:32:05.549436] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.175 [2024-07-13 16:32:05.549703] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.175 [2024-07-13 16:32:05.549974] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.175 [2024-07-13 16:32:05.550140] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.175 [2024-07-13 16:32:05.550219] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:14:34.175 16:32:05 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.175 16:32:05 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:34.433 16:32:05 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:34.433 16:32:05 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:34.433 16:32:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:34.433 16:32:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
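The verify_raid_bdev_state assertions seen throughout this run all use the same mechanism: dump every RAID bdev over RPC, select the one under test with jq, and compare individual fields against the expected values. A rough sketch of that pattern (not the helper's exact code):

# Dump all RAID bdevs and pick out the one under test.
# (RPC as defined in the first sketch above.)
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

# Assert on the fields the tests care about.
state=$(jq -r '.state' <<< "$info")        # "online", "configuring", "offline"
level=$(jq -r '.raid_level' <<< "$info")   # "raid0", "concat", ...
[ "$state" = online ] || echo "unexpected state: $state" >&2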
00:14:34.691 16:32:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:34.691 16:32:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:34.949 16:32:06 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:34.949 16:32:06 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:34.949 16:32:06 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:34.949 16:32:06 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:34.949 16:32:06 -- common/autotest_common.sh@640 -- # local es=0 00:14:34.949 16:32:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:34.949 16:32:06 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.949 16:32:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:34.949 16:32:06 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.949 16:32:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:34.949 16:32:06 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.949 16:32:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:34.949 16:32:06 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.949 16:32:06 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:34.949 16:32:06 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:35.208 [2024-07-13 16:32:06.646604] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:35.208 [2024-07-13 16:32:06.649295] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:35.208 [2024-07-13 16:32:06.649518] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:35.208 [2024-07-13 16:32:06.649708] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:35.208 [2024-07-13 16:32:06.649837] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.208 [2024-07-13 16:32:06.649876] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:14:35.208 request: 00:14:35.208 { 00:14:35.208 "name": "raid_bdev1", 00:14:35.208 "raid_level": "raid0", 00:14:35.208 "base_bdevs": [ 00:14:35.208 "malloc1", 00:14:35.208 "malloc2" 00:14:35.208 ], 00:14:35.208 "superblock": false, 00:14:35.208 "strip_size_kb": 64, 00:14:35.208 "method": "bdev_raid_create", 00:14:35.208 "req_id": 1 00:14:35.208 } 00:14:35.208 Got JSON-RPC error response 00:14:35.208 response: 00:14:35.208 { 00:14:35.208 "code": -17, 00:14:35.208 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:35.208 } 00:14:35.208 16:32:06 -- common/autotest_common.sh@643 -- # es=1 00:14:35.208 16:32:06 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:14:35.208 16:32:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:35.208 16:32:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:35.208 16:32:06 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.208 16:32:06 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:35.466 16:32:06 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:35.466 16:32:06 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:35.466 16:32:06 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:35.725 [2024-07-13 16:32:07.022672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:35.725 [2024-07-13 16:32:07.023029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.725 [2024-07-13 16:32:07.023125] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:35.725 [2024-07-13 16:32:07.023219] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.725 [2024-07-13 16:32:07.026012] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.725 [2024-07-13 16:32:07.026166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:35.725 [2024-07-13 16:32:07.026357] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:35.725 [2024-07-13 16:32:07.026504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:35.725 pt1 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.725 16:32:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.984 16:32:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.984 "name": "raid_bdev1", 00:14:35.984 "uuid": "8bf89c14-4a0c-49b2-afa4-beb3fc7cc5e7", 00:14:35.984 "strip_size_kb": 64, 00:14:35.984 "state": "configuring", 00:14:35.984 "raid_level": "raid0", 00:14:35.984 "superblock": true, 00:14:35.984 "num_base_bdevs": 2, 00:14:35.984 "num_base_bdevs_discovered": 1, 00:14:35.984 "num_base_bdevs_operational": 2, 00:14:35.984 "base_bdevs_list": [ 00:14:35.984 { 00:14:35.984 "name": "pt1", 00:14:35.984 "uuid": "31efd439-52a2-50fa-a747-c64f2f4a2cc0", 00:14:35.984 "is_configured": true, 00:14:35.984 "data_offset": 2048, 00:14:35.984 "data_size": 63488 00:14:35.984 }, 00:14:35.984 { 00:14:35.984 "name": null, 00:14:35.984 "uuid": "4361f9d4-3cde-5c1a-a455-143411423332", 00:14:35.984 
"is_configured": false, 00:14:35.984 "data_offset": 2048, 00:14:35.984 "data_size": 63488 00:14:35.984 } 00:14:35.984 ] 00:14:35.984 }' 00:14:35.984 16:32:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.984 16:32:07 -- common/autotest_common.sh@10 -- # set +x 00:14:36.549 16:32:07 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:36.549 16:32:07 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:36.549 16:32:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:36.549 16:32:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.549 [2024-07-13 16:32:07.998952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:36.549 [2024-07-13 16:32:07.999328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.549 [2024-07-13 16:32:07.999413] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:14:36.549 [2024-07-13 16:32:07.999521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.549 [2024-07-13 16:32:08.000075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.549 [2024-07-13 16:32:08.000235] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:36.549 [2024-07-13 16:32:08.000457] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:36.549 [2024-07-13 16:32:08.000581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:36.549 [2024-07-13 16:32:08.000745] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:36.549 [2024-07-13 16:32:08.000919] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:36.549 [2024-07-13 16:32:08.001055] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:36.549 [2024-07-13 16:32:08.001611] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:36.549 [2024-07-13 16:32:08.001728] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:36.549 [2024-07-13 16:32:08.001918] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.549 pt2 00:14:36.549 16:32:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.807 16:32:08 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:36.807 "name": "raid_bdev1", 00:14:36.807 "uuid": "8bf89c14-4a0c-49b2-afa4-beb3fc7cc5e7", 00:14:36.807 "strip_size_kb": 64, 00:14:36.807 "state": "online", 00:14:36.807 "raid_level": "raid0", 00:14:36.807 "superblock": true, 00:14:36.807 "num_base_bdevs": 2, 00:14:36.807 "num_base_bdevs_discovered": 2, 00:14:36.807 "num_base_bdevs_operational": 2, 00:14:36.807 "base_bdevs_list": [ 00:14:36.807 { 00:14:36.807 "name": "pt1", 00:14:36.807 "uuid": "31efd439-52a2-50fa-a747-c64f2f4a2cc0", 00:14:36.807 "is_configured": true, 00:14:36.807 "data_offset": 2048, 00:14:36.807 "data_size": 63488 00:14:36.807 }, 00:14:36.807 { 00:14:36.807 "name": "pt2", 00:14:36.807 "uuid": "4361f9d4-3cde-5c1a-a455-143411423332", 00:14:36.807 "is_configured": true, 00:14:36.807 "data_offset": 2048, 00:14:36.807 "data_size": 63488 00:14:36.807 } 00:14:36.807 ] 00:14:36.807 }' 00:14:36.807 16:32:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:36.807 16:32:08 -- common/autotest_common.sh@10 -- # set +x 00:14:37.374 16:32:08 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:37.374 16:32:08 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:37.631 [2024-07-13 16:32:08.995336] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.631 16:32:09 -- bdev/bdev_raid.sh@430 -- # '[' 8bf89c14-4a0c-49b2-afa4-beb3fc7cc5e7 '!=' 8bf89c14-4a0c-49b2-afa4-beb3fc7cc5e7 ']' 00:14:37.631 16:32:09 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:37.631 16:32:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:37.631 16:32:09 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:37.631 16:32:09 -- bdev/bdev_raid.sh@511 -- # killprocess 123509 00:14:37.631 16:32:09 -- common/autotest_common.sh@926 -- # '[' -z 123509 ']' 00:14:37.631 16:32:09 -- common/autotest_common.sh@930 -- # kill -0 123509 00:14:37.631 16:32:09 -- common/autotest_common.sh@931 -- # uname 00:14:37.631 16:32:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:37.631 16:32:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123509 00:14:37.631 killing process with pid 123509 00:14:37.631 16:32:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:37.631 16:32:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:37.631 16:32:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123509' 00:14:37.631 16:32:09 -- common/autotest_common.sh@945 -- # kill 123509 00:14:37.631 16:32:09 -- common/autotest_common.sh@950 -- # wait 123509 00:14:37.631 [2024-07-13 16:32:09.045115] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.631 [2024-07-13 16:32:09.045214] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.631 [2024-07-13 16:32:09.045268] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.631 [2024-07-13 16:32:09.045276] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:37.631 [2024-07-13 16:32:09.084743] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:38.199 00:14:38.199 real 0m7.159s 00:14:38.199 user 0m12.181s 00:14:38.199 sys 0m1.425s 00:14:38.199 16:32:09 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.199 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:14:38.199 ************************************ 00:14:38.199 END TEST raid_superblock_test 00:14:38.199 ************************************ 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:38.199 16:32:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:38.199 16:32:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:38.199 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:14:38.199 ************************************ 00:14:38.199 START TEST raid_state_function_test 00:14:38.199 ************************************ 00:14:38.199 16:32:09 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:38.199 16:32:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=123742 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123742' 00:14:38.200 Process raid pid: 123742 00:14:38.200 16:32:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123742 /var/tmp/spdk-raid.sock 00:14:38.200 16:32:09 -- common/autotest_common.sh@819 -- # '[' -z 123742 ']' 00:14:38.200 16:32:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:38.200 16:32:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:38.200 16:32:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
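Every test case in this log follows the same process lifecycle: bdev_svc is started with -r pointing at the RPC socket, waitforlisten polls until the socket answers, and killprocess tears the app down at the end by checking the PID is alive with kill -0, confirming the command name via ps, then killing and waiting. A simplified sketch of that teardown, reconstructed from the trace above (the real helper also special-cases processes named sudo, as the '[ reactor_0 = sudo ]' check shows):

# Simplified teardown, reconstructed from the killprocess trace above.
pid=123742
if kill -0 "$pid" 2>/dev/null; then              # is the process still alive?
    name=$(ps --no-headers -o comm= "$pid")      # expect "reactor_0" here
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                      # reap it if it is our child
fi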
00:14:38.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:38.200 16:32:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:38.200 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:14:38.200 [2024-07-13 16:32:09.607897] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:38.200 [2024-07-13 16:32:09.608443] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.459 [2024-07-13 16:32:09.765370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.459 [2024-07-13 16:32:09.846220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.459 [2024-07-13 16:32:09.924044] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.026 16:32:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:39.026 16:32:10 -- common/autotest_common.sh@852 -- # return 0 00:14:39.026 16:32:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:39.285 [2024-07-13 16:32:10.712015] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:39.285 [2024-07-13 16:32:10.712357] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:39.285 [2024-07-13 16:32:10.712453] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.285 [2024-07-13 16:32:10.712507] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.285 16:32:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.544 16:32:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:39.544 "name": "Existed_Raid", 00:14:39.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.544 "strip_size_kb": 64, 00:14:39.544 "state": "configuring", 00:14:39.544 "raid_level": "concat", 00:14:39.544 "superblock": false, 00:14:39.544 "num_base_bdevs": 2, 00:14:39.544 "num_base_bdevs_discovered": 0, 00:14:39.544 "num_base_bdevs_operational": 2, 00:14:39.544 "base_bdevs_list": [ 00:14:39.544 { 00:14:39.544 "name": "BaseBdev1", 00:14:39.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.544 "is_configured": false, 00:14:39.544 "data_offset": 0, 00:14:39.544 "data_size": 
0 00:14:39.544 }, 00:14:39.544 { 00:14:39.544 "name": "BaseBdev2", 00:14:39.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.544 "is_configured": false, 00:14:39.544 "data_offset": 0, 00:14:39.544 "data_size": 0 00:14:39.544 } 00:14:39.544 ] 00:14:39.544 }' 00:14:39.544 16:32:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:39.544 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.111 16:32:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:40.371 [2024-07-13 16:32:11.704047] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:40.371 [2024-07-13 16:32:11.704328] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:40.371 16:32:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:40.630 [2024-07-13 16:32:11.944142] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.630 [2024-07-13 16:32:11.944408] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.630 [2024-07-13 16:32:11.944506] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.630 [2024-07-13 16:32:11.944565] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.630 16:32:11 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:40.889 [2024-07-13 16:32:12.143939] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.889 BaseBdev1 00:14:40.889 16:32:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:40.889 16:32:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:40.889 16:32:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:40.889 16:32:12 -- common/autotest_common.sh@889 -- # local i 00:14:40.889 16:32:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:40.889 16:32:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:40.889 16:32:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:41.148 16:32:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:41.148 [ 00:14:41.149 { 00:14:41.149 "name": "BaseBdev1", 00:14:41.149 "aliases": [ 00:14:41.149 "7b8c0f9d-097c-42db-8aec-01cf8131b833" 00:14:41.149 ], 00:14:41.149 "product_name": "Malloc disk", 00:14:41.149 "block_size": 512, 00:14:41.149 "num_blocks": 65536, 00:14:41.149 "uuid": "7b8c0f9d-097c-42db-8aec-01cf8131b833", 00:14:41.149 "assigned_rate_limits": { 00:14:41.149 "rw_ios_per_sec": 0, 00:14:41.149 "rw_mbytes_per_sec": 0, 00:14:41.149 "r_mbytes_per_sec": 0, 00:14:41.149 "w_mbytes_per_sec": 0 00:14:41.149 }, 00:14:41.149 "claimed": true, 00:14:41.149 "claim_type": "exclusive_write", 00:14:41.149 "zoned": false, 00:14:41.149 "supported_io_types": { 00:14:41.149 "read": true, 00:14:41.149 "write": true, 00:14:41.149 "unmap": true, 00:14:41.149 "write_zeroes": true, 00:14:41.149 "flush": true, 00:14:41.149 "reset": true, 00:14:41.149 "compare": false, 00:14:41.149 "compare_and_write": false, 
00:14:41.149 "abort": true, 00:14:41.149 "nvme_admin": false, 00:14:41.149 "nvme_io": false 00:14:41.149 }, 00:14:41.149 "memory_domains": [ 00:14:41.149 { 00:14:41.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.149 "dma_device_type": 2 00:14:41.149 } 00:14:41.149 ], 00:14:41.149 "driver_specific": {} 00:14:41.149 } 00:14:41.149 ] 00:14:41.149 16:32:12 -- common/autotest_common.sh@895 -- # return 0 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.149 16:32:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.408 16:32:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.408 "name": "Existed_Raid", 00:14:41.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.408 "strip_size_kb": 64, 00:14:41.408 "state": "configuring", 00:14:41.408 "raid_level": "concat", 00:14:41.408 "superblock": false, 00:14:41.408 "num_base_bdevs": 2, 00:14:41.408 "num_base_bdevs_discovered": 1, 00:14:41.408 "num_base_bdevs_operational": 2, 00:14:41.408 "base_bdevs_list": [ 00:14:41.408 { 00:14:41.408 "name": "BaseBdev1", 00:14:41.408 "uuid": "7b8c0f9d-097c-42db-8aec-01cf8131b833", 00:14:41.408 "is_configured": true, 00:14:41.408 "data_offset": 0, 00:14:41.408 "data_size": 65536 00:14:41.408 }, 00:14:41.408 { 00:14:41.408 "name": "BaseBdev2", 00:14:41.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.408 "is_configured": false, 00:14:41.408 "data_offset": 0, 00:14:41.408 "data_size": 0 00:14:41.408 } 00:14:41.408 ] 00:14:41.408 }' 00:14:41.408 16:32:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.408 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:14:41.971 16:32:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:42.229 [2024-07-13 16:32:13.600250] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.229 [2024-07-13 16:32:13.600530] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:42.230 16:32:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:42.230 16:32:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:42.488 [2024-07-13 16:32:13.788391] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.488 [2024-07-13 16:32:13.791050] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.488 [2024-07-13 16:32:13.791241] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.488 16:32:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.747 16:32:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.747 "name": "Existed_Raid", 00:14:42.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.747 "strip_size_kb": 64, 00:14:42.747 "state": "configuring", 00:14:42.747 "raid_level": "concat", 00:14:42.747 "superblock": false, 00:14:42.747 "num_base_bdevs": 2, 00:14:42.747 "num_base_bdevs_discovered": 1, 00:14:42.747 "num_base_bdevs_operational": 2, 00:14:42.747 "base_bdevs_list": [ 00:14:42.747 { 00:14:42.747 "name": "BaseBdev1", 00:14:42.747 "uuid": "7b8c0f9d-097c-42db-8aec-01cf8131b833", 00:14:42.747 "is_configured": true, 00:14:42.747 "data_offset": 0, 00:14:42.747 "data_size": 65536 00:14:42.747 }, 00:14:42.747 { 00:14:42.747 "name": "BaseBdev2", 00:14:42.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.747 "is_configured": false, 00:14:42.747 "data_offset": 0, 00:14:42.747 "data_size": 0 00:14:42.747 } 00:14:42.747 ] 00:14:42.747 }' 00:14:42.747 16:32:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.747 16:32:13 -- common/autotest_common.sh@10 -- # set +x 00:14:43.314 16:32:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.314 [2024-07-13 16:32:14.758010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.314 [2024-07-13 16:32:14.758372] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:43.314 [2024-07-13 16:32:14.758436] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:43.314 [2024-07-13 16:32:14.758787] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:43.314 [2024-07-13 16:32:14.759544] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:43.314 [2024-07-13 16:32:14.759698] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:43.314 [2024-07-13 16:32:14.760188] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.314 BaseBdev2 00:14:43.314 16:32:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:43.314 16:32:14 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:43.314 16:32:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:43.314 16:32:14 -- common/autotest_common.sh@889 -- # local i 00:14:43.314 16:32:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:43.314 16:32:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:43.314 16:32:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.572 16:32:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.830 [ 00:14:43.830 { 00:14:43.830 "name": "BaseBdev2", 00:14:43.830 "aliases": [ 00:14:43.830 "df94cba5-6ec5-4bb4-8083-7d7e97cd04f1" 00:14:43.830 ], 00:14:43.830 "product_name": "Malloc disk", 00:14:43.830 "block_size": 512, 00:14:43.830 "num_blocks": 65536, 00:14:43.830 "uuid": "df94cba5-6ec5-4bb4-8083-7d7e97cd04f1", 00:14:43.830 "assigned_rate_limits": { 00:14:43.830 "rw_ios_per_sec": 0, 00:14:43.830 "rw_mbytes_per_sec": 0, 00:14:43.830 "r_mbytes_per_sec": 0, 00:14:43.830 "w_mbytes_per_sec": 0 00:14:43.830 }, 00:14:43.830 "claimed": true, 00:14:43.830 "claim_type": "exclusive_write", 00:14:43.830 "zoned": false, 00:14:43.830 "supported_io_types": { 00:14:43.830 "read": true, 00:14:43.830 "write": true, 00:14:43.830 "unmap": true, 00:14:43.830 "write_zeroes": true, 00:14:43.830 "flush": true, 00:14:43.830 "reset": true, 00:14:43.830 "compare": false, 00:14:43.830 "compare_and_write": false, 00:14:43.830 "abort": true, 00:14:43.830 "nvme_admin": false, 00:14:43.830 "nvme_io": false 00:14:43.830 }, 00:14:43.830 "memory_domains": [ 00:14:43.830 { 00:14:43.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.830 "dma_device_type": 2 00:14:43.830 } 00:14:43.830 ], 00:14:43.830 "driver_specific": {} 00:14:43.830 } 00:14:43.830 ] 00:14:43.830 16:32:15 -- common/autotest_common.sh@895 -- # return 0 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.830 16:32:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.089 16:32:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.089 "name": "Existed_Raid", 00:14:44.089 "uuid": "460c5d33-bf0a-45a6-bce9-295d49da2eba", 00:14:44.089 "strip_size_kb": 64, 00:14:44.089 "state": "online", 00:14:44.089 "raid_level": "concat", 00:14:44.089 "superblock": false, 00:14:44.089 "num_base_bdevs": 2, 00:14:44.089 
"num_base_bdevs_discovered": 2, 00:14:44.089 "num_base_bdevs_operational": 2, 00:14:44.089 "base_bdevs_list": [ 00:14:44.089 { 00:14:44.089 "name": "BaseBdev1", 00:14:44.089 "uuid": "7b8c0f9d-097c-42db-8aec-01cf8131b833", 00:14:44.089 "is_configured": true, 00:14:44.089 "data_offset": 0, 00:14:44.089 "data_size": 65536 00:14:44.089 }, 00:14:44.089 { 00:14:44.089 "name": "BaseBdev2", 00:14:44.089 "uuid": "df94cba5-6ec5-4bb4-8083-7d7e97cd04f1", 00:14:44.089 "is_configured": true, 00:14:44.089 "data_offset": 0, 00:14:44.089 "data_size": 65536 00:14:44.089 } 00:14:44.089 ] 00:14:44.089 }' 00:14:44.089 16:32:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.089 16:32:15 -- common/autotest_common.sh@10 -- # set +x 00:14:44.657 16:32:15 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:44.657 [2024-07-13 16:32:16.080941] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.657 [2024-07-13 16:32:16.081396] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.657 [2024-07-13 16:32:16.081744] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.657 16:32:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.916 16:32:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.916 16:32:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.175 16:32:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.175 "name": "Existed_Raid", 00:14:45.175 "uuid": "460c5d33-bf0a-45a6-bce9-295d49da2eba", 00:14:45.175 "strip_size_kb": 64, 00:14:45.175 "state": "offline", 00:14:45.175 "raid_level": "concat", 00:14:45.175 "superblock": false, 00:14:45.175 "num_base_bdevs": 2, 00:14:45.175 "num_base_bdevs_discovered": 1, 00:14:45.175 "num_base_bdevs_operational": 1, 00:14:45.175 "base_bdevs_list": [ 00:14:45.175 { 00:14:45.175 "name": null, 00:14:45.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.175 "is_configured": false, 00:14:45.175 "data_offset": 0, 00:14:45.175 "data_size": 65536 00:14:45.175 }, 00:14:45.175 { 00:14:45.175 "name": "BaseBdev2", 00:14:45.175 "uuid": "df94cba5-6ec5-4bb4-8083-7d7e97cd04f1", 00:14:45.175 "is_configured": true, 00:14:45.175 "data_offset": 0, 00:14:45.175 "data_size": 65536 00:14:45.175 } 00:14:45.175 ] 
00:14:45.175 }' 00:14:45.175 16:32:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.175 16:32:16 -- common/autotest_common.sh@10 -- # set +x 00:14:45.743 16:32:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:45.743 16:32:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:45.743 16:32:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.743 16:32:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:45.743 16:32:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:45.743 16:32:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.743 16:32:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:46.001 [2024-07-13 16:32:17.334700] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.001 [2024-07-13 16:32:17.335252] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:14:46.001 16:32:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:46.001 16:32:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:46.001 16:32:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.001 16:32:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:46.261 16:32:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:46.261 16:32:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:46.261 16:32:17 -- bdev/bdev_raid.sh@287 -- # killprocess 123742 00:14:46.261 16:32:17 -- common/autotest_common.sh@926 -- # '[' -z 123742 ']' 00:14:46.261 16:32:17 -- common/autotest_common.sh@930 -- # kill -0 123742 00:14:46.261 16:32:17 -- common/autotest_common.sh@931 -- # uname 00:14:46.261 16:32:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:46.261 16:32:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123742 00:14:46.261 killing process with pid 123742 00:14:46.261 16:32:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:46.261 16:32:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:46.261 16:32:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123742' 00:14:46.261 16:32:17 -- common/autotest_common.sh@945 -- # kill 123742 00:14:46.261 16:32:17 -- common/autotest_common.sh@950 -- # wait 123742 00:14:46.261 [2024-07-13 16:32:17.602678] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.261 [2024-07-13 16:32:17.602976] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.828 ************************************ 00:14:46.828 END TEST raid_state_function_test 00:14:46.828 ************************************ 00:14:46.828 16:32:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:46.828 00:14:46.828 real 0m8.465s 00:14:46.828 user 0m14.581s 00:14:46.828 sys 0m1.571s 00:14:46.828 16:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.828 16:32:17 -- common/autotest_common.sh@10 -- # set +x 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:46.828 16:32:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:46.828 16:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:46.828 16:32:18 -- common/autotest_common.sh@10 -- # set +x 00:14:46.828 
************************************ 00:14:46.828 START TEST raid_state_function_test_sb 00:14:46.828 ************************************ 00:14:46.828 16:32:18 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=124044 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:46.828 Process raid pid: 124044 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124044' 00:14:46.828 16:32:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124044 /var/tmp/spdk-raid.sock 00:14:46.828 16:32:18 -- common/autotest_common.sh@819 -- # '[' -z 124044 ']' 00:14:46.828 16:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:46.829 16:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:46.829 16:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:46.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:46.829 16:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:46.829 16:32:18 -- common/autotest_common.sh@10 -- # set +x 00:14:46.829 [2024-07-13 16:32:18.134299] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
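This _sb variant re-runs the same concat state-machine test with superblock=true, so raid creation gains the -s flag. The observable difference in the dumps that follow is the per-member layout: with a superblock each base bdev reports data_offset 2048 / data_size 63488, where the non-superblock run above reported data_offset 0 / data_size 65536. A sketch of the creation call, under the same assumptions as the earlier snippets:

# Same harness as the previous test, now with an on-disk superblock (-s).
# (RPC as defined in the first sketch above.)
$RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# Each member donates 2048 of its 65536 blocks to the superblock, which
# is why the JSON dumps below show data_offset 2048 / data_size 63488.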
00:14:46.829 [2024-07-13 16:32:18.135184] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.829 [2024-07-13 16:32:18.285807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.087 [2024-07-13 16:32:18.376048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.087 [2024-07-13 16:32:18.460815] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.653 16:32:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:47.653 16:32:19 -- common/autotest_common.sh@852 -- # return 0 00:14:47.653 16:32:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:47.911 [2024-07-13 16:32:19.330534] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.911 [2024-07-13 16:32:19.330884] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.911 [2024-07-13 16:32:19.330985] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.911 [2024-07-13 16:32:19.331037] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.911 16:32:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:47.911 16:32:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.912 16:32:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.169 16:32:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.169 "name": "Existed_Raid", 00:14:48.169 "uuid": "b75c24ba-8606-4369-b6b7-6d48d22fe06e", 00:14:48.169 "strip_size_kb": 64, 00:14:48.169 "state": "configuring", 00:14:48.169 "raid_level": "concat", 00:14:48.169 "superblock": true, 00:14:48.169 "num_base_bdevs": 2, 00:14:48.169 "num_base_bdevs_discovered": 0, 00:14:48.169 "num_base_bdevs_operational": 2, 00:14:48.169 "base_bdevs_list": [ 00:14:48.169 { 00:14:48.169 "name": "BaseBdev1", 00:14:48.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.169 "is_configured": false, 00:14:48.169 "data_offset": 0, 00:14:48.169 "data_size": 0 00:14:48.169 }, 00:14:48.169 { 00:14:48.169 "name": "BaseBdev2", 00:14:48.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.169 "is_configured": false, 00:14:48.169 "data_offset": 0, 00:14:48.169 "data_size": 0 00:14:48.169 } 00:14:48.169 ] 00:14:48.169 }' 00:14:48.169 16:32:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.169 16:32:19 -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.736 16:32:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:49.013 [2024-07-13 16:32:20.238555] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.013 [2024-07-13 16:32:20.238814] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:49.013 16:32:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:49.013 [2024-07-13 16:32:20.426650] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.013 [2024-07-13 16:32:20.426977] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.013 [2024-07-13 16:32:20.427062] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.013 [2024-07-13 16:32:20.427124] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.013 16:32:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.279 [2024-07-13 16:32:20.630399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.279 BaseBdev1 00:14:49.279 16:32:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:49.279 16:32:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:49.279 16:32:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:49.279 16:32:20 -- common/autotest_common.sh@889 -- # local i 00:14:49.279 16:32:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:49.279 16:32:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:49.279 16:32:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.538 16:32:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.797 [ 00:14:49.797 { 00:14:49.797 "name": "BaseBdev1", 00:14:49.797 "aliases": [ 00:14:49.797 "b4049baa-56f0-4066-a350-e2510bd5d9ff" 00:14:49.797 ], 00:14:49.797 "product_name": "Malloc disk", 00:14:49.797 "block_size": 512, 00:14:49.797 "num_blocks": 65536, 00:14:49.797 "uuid": "b4049baa-56f0-4066-a350-e2510bd5d9ff", 00:14:49.797 "assigned_rate_limits": { 00:14:49.797 "rw_ios_per_sec": 0, 00:14:49.797 "rw_mbytes_per_sec": 0, 00:14:49.797 "r_mbytes_per_sec": 0, 00:14:49.797 "w_mbytes_per_sec": 0 00:14:49.797 }, 00:14:49.797 "claimed": true, 00:14:49.797 "claim_type": "exclusive_write", 00:14:49.797 "zoned": false, 00:14:49.797 "supported_io_types": { 00:14:49.797 "read": true, 00:14:49.797 "write": true, 00:14:49.797 "unmap": true, 00:14:49.797 "write_zeroes": true, 00:14:49.797 "flush": true, 00:14:49.797 "reset": true, 00:14:49.797 "compare": false, 00:14:49.797 "compare_and_write": false, 00:14:49.797 "abort": true, 00:14:49.797 "nvme_admin": false, 00:14:49.797 "nvme_io": false 00:14:49.797 }, 00:14:49.797 "memory_domains": [ 00:14:49.797 { 00:14:49.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.797 "dma_device_type": 2 00:14:49.797 } 00:14:49.797 ], 00:14:49.797 "driver_specific": {} 00:14:49.797 } 00:14:49.797 ] 00:14:49.797 
16:32:21 -- common/autotest_common.sh@895 -- # return 0 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.797 16:32:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.056 16:32:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.056 "name": "Existed_Raid", 00:14:50.056 "uuid": "7853fac7-108b-4ff0-96d2-1a3caac14662", 00:14:50.056 "strip_size_kb": 64, 00:14:50.056 "state": "configuring", 00:14:50.056 "raid_level": "concat", 00:14:50.056 "superblock": true, 00:14:50.056 "num_base_bdevs": 2, 00:14:50.056 "num_base_bdevs_discovered": 1, 00:14:50.056 "num_base_bdevs_operational": 2, 00:14:50.057 "base_bdevs_list": [ 00:14:50.057 { 00:14:50.057 "name": "BaseBdev1", 00:14:50.057 "uuid": "b4049baa-56f0-4066-a350-e2510bd5d9ff", 00:14:50.057 "is_configured": true, 00:14:50.057 "data_offset": 2048, 00:14:50.057 "data_size": 63488 00:14:50.057 }, 00:14:50.057 { 00:14:50.057 "name": "BaseBdev2", 00:14:50.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.057 "is_configured": false, 00:14:50.057 "data_offset": 0, 00:14:50.057 "data_size": 0 00:14:50.057 } 00:14:50.057 ] 00:14:50.057 }' 00:14:50.057 16:32:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.057 16:32:21 -- common/autotest_common.sh@10 -- # set +x 00:14:50.636 16:32:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:50.636 [2024-07-13 16:32:22.070717] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.636 [2024-07-13 16:32:22.070945] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:50.636 16:32:22 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:50.636 16:32:22 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:50.895 16:32:22 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.153 BaseBdev1 00:14:51.153 16:32:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:51.153 16:32:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:51.153 16:32:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:51.153 16:32:22 -- common/autotest_common.sh@889 -- # local i 00:14:51.153 16:32:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:51.153 16:32:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:51.153 16:32:22 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:51.412 16:32:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.671 [ 00:14:51.671 { 00:14:51.671 "name": "BaseBdev1", 00:14:51.671 "aliases": [ 00:14:51.671 "137283a6-ff9b-49d8-98b8-97b0c6e49dd8" 00:14:51.671 ], 00:14:51.671 "product_name": "Malloc disk", 00:14:51.671 "block_size": 512, 00:14:51.671 "num_blocks": 65536, 00:14:51.671 "uuid": "137283a6-ff9b-49d8-98b8-97b0c6e49dd8", 00:14:51.671 "assigned_rate_limits": { 00:14:51.671 "rw_ios_per_sec": 0, 00:14:51.671 "rw_mbytes_per_sec": 0, 00:14:51.671 "r_mbytes_per_sec": 0, 00:14:51.671 "w_mbytes_per_sec": 0 00:14:51.671 }, 00:14:51.671 "claimed": false, 00:14:51.671 "zoned": false, 00:14:51.671 "supported_io_types": { 00:14:51.671 "read": true, 00:14:51.671 "write": true, 00:14:51.671 "unmap": true, 00:14:51.671 "write_zeroes": true, 00:14:51.671 "flush": true, 00:14:51.671 "reset": true, 00:14:51.671 "compare": false, 00:14:51.671 "compare_and_write": false, 00:14:51.671 "abort": true, 00:14:51.671 "nvme_admin": false, 00:14:51.671 "nvme_io": false 00:14:51.671 }, 00:14:51.671 "memory_domains": [ 00:14:51.671 { 00:14:51.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.671 "dma_device_type": 2 00:14:51.671 } 00:14:51.671 ], 00:14:51.671 "driver_specific": {} 00:14:51.671 } 00:14:51.671 ] 00:14:51.671 16:32:22 -- common/autotest_common.sh@895 -- # return 0 00:14:51.671 16:32:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:51.928 [2024-07-13 16:32:23.143085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.928 [2024-07-13 16:32:23.145729] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.928 [2024-07-13 16:32:23.145899] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.928 16:32:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:51.928 "name": "Existed_Raid", 00:14:51.928 "uuid": "55467dae-bbc4-4bf5-b801-0b3cb195f507", 00:14:51.928 "strip_size_kb": 64, 00:14:51.928 "state": 
"configuring", 00:14:51.928 "raid_level": "concat", 00:14:51.928 "superblock": true, 00:14:51.928 "num_base_bdevs": 2, 00:14:51.928 "num_base_bdevs_discovered": 1, 00:14:51.928 "num_base_bdevs_operational": 2, 00:14:51.928 "base_bdevs_list": [ 00:14:51.928 { 00:14:51.928 "name": "BaseBdev1", 00:14:51.928 "uuid": "137283a6-ff9b-49d8-98b8-97b0c6e49dd8", 00:14:51.928 "is_configured": true, 00:14:51.928 "data_offset": 2048, 00:14:51.928 "data_size": 63488 00:14:51.928 }, 00:14:51.928 { 00:14:51.928 "name": "BaseBdev2", 00:14:51.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.929 "is_configured": false, 00:14:51.929 "data_offset": 0, 00:14:51.929 "data_size": 0 00:14:51.929 } 00:14:51.929 ] 00:14:51.929 }' 00:14:51.929 16:32:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:51.929 16:32:23 -- common/autotest_common.sh@10 -- # set +x 00:14:52.493 16:32:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.752 [2024-07-13 16:32:24.113994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.752 [2024-07-13 16:32:24.114600] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:14:52.752 [2024-07-13 16:32:24.114754] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.752 [2024-07-13 16:32:24.115006] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:52.752 [2024-07-13 16:32:24.115592] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:14:52.752 [2024-07-13 16:32:24.115734] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:14:52.752 BaseBdev2 00:14:52.752 [2024-07-13 16:32:24.116087] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.752 16:32:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:52.752 16:32:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:52.752 16:32:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:52.752 16:32:24 -- common/autotest_common.sh@889 -- # local i 00:14:52.752 16:32:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:52.752 16:32:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:52.752 16:32:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.011 16:32:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.270 [ 00:14:53.270 { 00:14:53.270 "name": "BaseBdev2", 00:14:53.270 "aliases": [ 00:14:53.270 "e7fbf9b6-5693-4d62-abe3-c6c85381ecb7" 00:14:53.270 ], 00:14:53.270 "product_name": "Malloc disk", 00:14:53.270 "block_size": 512, 00:14:53.270 "num_blocks": 65536, 00:14:53.270 "uuid": "e7fbf9b6-5693-4d62-abe3-c6c85381ecb7", 00:14:53.270 "assigned_rate_limits": { 00:14:53.270 "rw_ios_per_sec": 0, 00:14:53.270 "rw_mbytes_per_sec": 0, 00:14:53.270 "r_mbytes_per_sec": 0, 00:14:53.270 "w_mbytes_per_sec": 0 00:14:53.270 }, 00:14:53.270 "claimed": true, 00:14:53.270 "claim_type": "exclusive_write", 00:14:53.270 "zoned": false, 00:14:53.270 "supported_io_types": { 00:14:53.270 "read": true, 00:14:53.270 "write": true, 00:14:53.270 "unmap": true, 00:14:53.270 "write_zeroes": true, 00:14:53.270 "flush": true, 00:14:53.270 
"reset": true, 00:14:53.271 "compare": false, 00:14:53.271 "compare_and_write": false, 00:14:53.271 "abort": true, 00:14:53.271 "nvme_admin": false, 00:14:53.271 "nvme_io": false 00:14:53.271 }, 00:14:53.271 "memory_domains": [ 00:14:53.271 { 00:14:53.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.271 "dma_device_type": 2 00:14:53.271 } 00:14:53.271 ], 00:14:53.271 "driver_specific": {} 00:14:53.271 } 00:14:53.271 ] 00:14:53.271 16:32:24 -- common/autotest_common.sh@895 -- # return 0 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.271 "name": "Existed_Raid", 00:14:53.271 "uuid": "55467dae-bbc4-4bf5-b801-0b3cb195f507", 00:14:53.271 "strip_size_kb": 64, 00:14:53.271 "state": "online", 00:14:53.271 "raid_level": "concat", 00:14:53.271 "superblock": true, 00:14:53.271 "num_base_bdevs": 2, 00:14:53.271 "num_base_bdevs_discovered": 2, 00:14:53.271 "num_base_bdevs_operational": 2, 00:14:53.271 "base_bdevs_list": [ 00:14:53.271 { 00:14:53.271 "name": "BaseBdev1", 00:14:53.271 "uuid": "137283a6-ff9b-49d8-98b8-97b0c6e49dd8", 00:14:53.271 "is_configured": true, 00:14:53.271 "data_offset": 2048, 00:14:53.271 "data_size": 63488 00:14:53.271 }, 00:14:53.271 { 00:14:53.271 "name": "BaseBdev2", 00:14:53.271 "uuid": "e7fbf9b6-5693-4d62-abe3-c6c85381ecb7", 00:14:53.271 "is_configured": true, 00:14:53.271 "data_offset": 2048, 00:14:53.271 "data_size": 63488 00:14:53.271 } 00:14:53.271 ] 00:14:53.271 }' 00:14:53.271 16:32:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.271 16:32:24 -- common/autotest_common.sh@10 -- # set +x 00:14:53.839 16:32:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:54.097 [2024-07-13 16:32:25.538426] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:54.098 [2024-07-13 16:32:25.538645] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.098 [2024-07-13 16:32:25.538878] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:54.357 
16:32:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.357 "name": "Existed_Raid", 00:14:54.357 "uuid": "55467dae-bbc4-4bf5-b801-0b3cb195f507", 00:14:54.357 "strip_size_kb": 64, 00:14:54.357 "state": "offline", 00:14:54.357 "raid_level": "concat", 00:14:54.357 "superblock": true, 00:14:54.357 "num_base_bdevs": 2, 00:14:54.357 "num_base_bdevs_discovered": 1, 00:14:54.357 "num_base_bdevs_operational": 1, 00:14:54.357 "base_bdevs_list": [ 00:14:54.357 { 00:14:54.357 "name": null, 00:14:54.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.357 "is_configured": false, 00:14:54.357 "data_offset": 2048, 00:14:54.357 "data_size": 63488 00:14:54.357 }, 00:14:54.357 { 00:14:54.357 "name": "BaseBdev2", 00:14:54.357 "uuid": "e7fbf9b6-5693-4d62-abe3-c6c85381ecb7", 00:14:54.357 "is_configured": true, 00:14:54.357 "data_offset": 2048, 00:14:54.357 "data_size": 63488 00:14:54.357 } 00:14:54.357 ] 00:14:54.357 }' 00:14:54.357 16:32:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.357 16:32:25 -- common/autotest_common.sh@10 -- # set +x 00:14:54.926 16:32:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:54.926 16:32:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:54.926 16:32:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.926 16:32:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:55.185 16:32:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:55.185 16:32:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.186 16:32:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:55.453 [2024-07-13 16:32:26.678495] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.453 [2024-07-13 16:32:26.678802] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:14:55.453 16:32:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:55.453 16:32:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:55.453 16:32:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.453 16:32:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:55.715 16:32:26 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:55.715 16:32:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:55.715 16:32:26 -- bdev/bdev_raid.sh@287 -- # killprocess 124044 00:14:55.715 16:32:26 -- common/autotest_common.sh@926 -- # '[' -z 124044 ']' 00:14:55.715 16:32:26 -- common/autotest_common.sh@930 -- # kill -0 124044 00:14:55.715 16:32:26 -- common/autotest_common.sh@931 -- # uname 00:14:55.715 16:32:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:55.715 16:32:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124044 00:14:55.715 16:32:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:55.715 16:32:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:55.715 16:32:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124044' 00:14:55.715 killing process with pid 124044 00:14:55.715 16:32:26 -- common/autotest_common.sh@945 -- # kill 124044 00:14:55.715 16:32:26 -- common/autotest_common.sh@950 -- # wait 124044 00:14:55.715 [2024-07-13 16:32:26.956454] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.715 [2024-07-13 16:32:26.956680] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.973 ************************************ 00:14:55.973 END TEST raid_state_function_test_sb 00:14:55.973 ************************************ 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:55.973 00:14:55.973 real 0m9.289s 00:14:55.973 user 0m16.052s 00:14:55.973 sys 0m1.702s 00:14:55.973 16:32:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.973 16:32:27 -- common/autotest_common.sh@10 -- # set +x 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:55.973 16:32:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:55.973 16:32:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.973 16:32:27 -- common/autotest_common.sh@10 -- # set +x 00:14:55.973 ************************************ 00:14:55.973 START TEST raid_superblock_test 00:14:55.973 ************************************ 00:14:55.973 16:32:27 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=124356 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124356 
/var/tmp/spdk-raid.sock 00:14:55.973 16:32:27 -- common/autotest_common.sh@819 -- # '[' -z 124356 ']' 00:14:55.973 16:32:27 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:55.973 16:32:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:55.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:55.973 16:32:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:55.973 16:32:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:55.973 16:32:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:55.973 16:32:27 -- common/autotest_common.sh@10 -- # set +x 00:14:56.232 [2024-07-13 16:32:27.487301] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:56.232 [2024-07-13 16:32:27.487761] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124356 ] 00:14:56.232 [2024-07-13 16:32:27.634013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.491 [2024-07-13 16:32:27.719597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.491 [2024-07-13 16:32:27.798082] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.059 16:32:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:57.059 16:32:28 -- common/autotest_common.sh@852 -- # return 0 00:14:57.059 16:32:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:57.059 16:32:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:57.059 16:32:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:57.059 16:32:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:57.059 16:32:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:57.059 16:32:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:57.059 16:32:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:57.059 16:32:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:57.059 16:32:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:57.326 malloc1 00:14:57.326 16:32:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:57.599 [2024-07-13 16:32:28.827637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:57.599 [2024-07-13 16:32:28.827917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.599 [2024-07-13 16:32:28.828001] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:14:57.599 [2024-07-13 16:32:28.828135] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.599 [2024-07-13 16:32:28.831156] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.599 [2024-07-13 16:32:28.831355] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:57.599 pt1 00:14:57.599 16:32:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
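raid_superblock_test builds each base device as a passthru bdev with a fixed, well-known UUID layered over a malloc disk; judging by the later examine messages ("raid superblock found on bdev pt1"/"pt2"), the stable UUIDs are what let the superblock re-identify its base bdevs. A sketch of the stacking for the first leg, with every command and UUID exactly as it appears in the trace:

    # Back the first leg with a 32 MiB malloc disk (512-byte blocks).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b malloc1

    # Wrap it in a passthru bdev carrying a stable UUID.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

    # After pt2 is built the same way, the superblock-bearing array goes on top.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s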
00:14:57.599 16:32:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:57.599 16:32:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:57.599 16:32:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:57.599 16:32:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:57.599 16:32:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:57.599 16:32:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:57.599 16:32:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:57.599 16:32:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:57.599 malloc2 00:14:57.599 16:32:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:57.858 [2024-07-13 16:32:29.203304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:57.858 [2024-07-13 16:32:29.203639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.858 [2024-07-13 16:32:29.203718] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:57.858 [2024-07-13 16:32:29.203832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.858 [2024-07-13 16:32:29.206655] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.859 [2024-07-13 16:32:29.206823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:57.859 pt2 00:14:57.859 16:32:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:57.859 16:32:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:57.859 16:32:29 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:58.118 [2024-07-13 16:32:29.391567] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:58.118 [2024-07-13 16:32:29.394275] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.118 [2024-07-13 16:32:29.394618] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:14:58.118 [2024-07-13 16:32:29.394726] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:58.118 [2024-07-13 16:32:29.394933] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:58.118 [2024-07-13 16:32:29.395408] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:14:58.118 [2024-07-13 16:32:29.395508] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:14:58.118 [2024-07-13 16:32:29.395779] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.118 16:32:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.377 16:32:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:58.377 "name": "raid_bdev1", 00:14:58.377 "uuid": "78f310fe-e6e4-4682-b60e-8447fa6dea4a", 00:14:58.377 "strip_size_kb": 64, 00:14:58.377 "state": "online", 00:14:58.377 "raid_level": "concat", 00:14:58.377 "superblock": true, 00:14:58.377 "num_base_bdevs": 2, 00:14:58.377 "num_base_bdevs_discovered": 2, 00:14:58.377 "num_base_bdevs_operational": 2, 00:14:58.377 "base_bdevs_list": [ 00:14:58.377 { 00:14:58.377 "name": "pt1", 00:14:58.377 "uuid": "f05e13bf-332f-5cfe-87e6-55ce4ee25918", 00:14:58.377 "is_configured": true, 00:14:58.377 "data_offset": 2048, 00:14:58.377 "data_size": 63488 00:14:58.377 }, 00:14:58.377 { 00:14:58.377 "name": "pt2", 00:14:58.377 "uuid": "d284d655-6101-54cb-9275-34ac546a841c", 00:14:58.377 "is_configured": true, 00:14:58.377 "data_offset": 2048, 00:14:58.377 "data_size": 63488 00:14:58.377 } 00:14:58.377 ] 00:14:58.377 }' 00:14:58.377 16:32:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:58.377 16:32:29 -- common/autotest_common.sh@10 -- # set +x 00:14:58.945 16:32:30 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:58.945 16:32:30 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:59.203 [2024-07-13 16:32:30.544193] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.203 16:32:30 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=78f310fe-e6e4-4682-b60e-8447fa6dea4a 00:14:59.203 16:32:30 -- bdev/bdev_raid.sh@380 -- # '[' -z 78f310fe-e6e4-4682-b60e-8447fa6dea4a ']' 00:14:59.203 16:32:30 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:59.462 [2024-07-13 16:32:30.748005] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.462 [2024-07-13 16:32:30.748236] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.462 [2024-07-13 16:32:30.748502] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.462 [2024-07-13 16:32:30.748659] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.462 [2024-07-13 16:32:30.748741] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:14:59.462 16:32:30 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.462 16:32:30 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:59.721 16:32:31 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:59.721 16:32:31 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:59.721 16:32:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.721 16:32:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:14:59.979 16:32:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.979 16:32:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:59.979 16:32:31 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:59.979 16:32:31 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:00.238 16:32:31 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:00.238 16:32:31 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:00.238 16:32:31 -- common/autotest_common.sh@640 -- # local es=0 00:15:00.238 16:32:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:00.238 16:32:31 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.238 16:32:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:00.238 16:32:31 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.238 16:32:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:00.238 16:32:31 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.238 16:32:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:00.238 16:32:31 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.238 16:32:31 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:00.238 16:32:31 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:00.496 [2024-07-13 16:32:31.820162] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:00.496 [2024-07-13 16:32:31.822867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:00.496 [2024-07-13 16:32:31.823061] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:00.496 [2024-07-13 16:32:31.823270] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:00.496 [2024-07-13 16:32:31.823399] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.496 [2024-07-13 16:32:31.823435] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:15:00.496 request: 00:15:00.496 { 00:15:00.496 "name": "raid_bdev1", 00:15:00.496 "raid_level": "concat", 00:15:00.496 "base_bdevs": [ 00:15:00.496 "malloc1", 00:15:00.496 "malloc2" 00:15:00.496 ], 00:15:00.496 "superblock": false, 00:15:00.496 "strip_size_kb": 64, 00:15:00.496 "method": "bdev_raid_create", 00:15:00.496 "req_id": 1 00:15:00.496 } 00:15:00.496 Got JSON-RPC error response 00:15:00.496 response: 00:15:00.496 { 00:15:00.496 "code": -17, 00:15:00.496 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:00.496 } 00:15:00.496 16:32:31 -- common/autotest_common.sh@643 -- # es=1 00:15:00.496 16:32:31 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:00.496 16:32:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:00.496 16:32:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:00.496 16:32:31 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.496 16:32:31 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:00.754 16:32:32 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:00.754 16:32:32 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:00.754 16:32:32 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.012 [2024-07-13 16:32:32.260210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.012 [2024-07-13 16:32:32.260499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.012 [2024-07-13 16:32:32.260599] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:01.012 [2024-07-13 16:32:32.260707] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.012 [2024-07-13 16:32:32.263509] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.012 [2024-07-13 16:32:32.263671] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.012 [2024-07-13 16:32:32.263853] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:01.012 [2024-07-13 16:32:32.264015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.012 pt1 00:15:01.012 16:32:32 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:01.012 16:32:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:01.012 16:32:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.012 16:32:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:01.012 16:32:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.013 16:32:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:01.013 16:32:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.013 16:32:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.013 16:32:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.013 16:32:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.013 16:32:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.013 16:32:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.271 16:32:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.271 "name": "raid_bdev1", 00:15:01.271 "uuid": "78f310fe-e6e4-4682-b60e-8447fa6dea4a", 00:15:01.271 "strip_size_kb": 64, 00:15:01.271 "state": "configuring", 00:15:01.271 "raid_level": "concat", 00:15:01.271 "superblock": true, 00:15:01.271 "num_base_bdevs": 2, 00:15:01.271 "num_base_bdevs_discovered": 1, 00:15:01.271 "num_base_bdevs_operational": 2, 00:15:01.271 "base_bdevs_list": [ 00:15:01.271 { 00:15:01.271 "name": "pt1", 00:15:01.271 "uuid": "f05e13bf-332f-5cfe-87e6-55ce4ee25918", 00:15:01.271 "is_configured": true, 00:15:01.271 "data_offset": 2048, 00:15:01.271 "data_size": 63488 00:15:01.271 }, 00:15:01.271 { 00:15:01.271 "name": null, 00:15:01.271 "uuid": 
"d284d655-6101-54cb-9275-34ac546a841c", 00:15:01.271 "is_configured": false, 00:15:01.271 "data_offset": 2048, 00:15:01.271 "data_size": 63488 00:15:01.271 } 00:15:01.271 ] 00:15:01.271 }' 00:15:01.271 16:32:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.271 16:32:32 -- common/autotest_common.sh@10 -- # set +x 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.842 [2024-07-13 16:32:33.196437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.842 [2024-07-13 16:32:33.196725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.842 [2024-07-13 16:32:33.196801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:01.842 [2024-07-13 16:32:33.196902] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.842 [2024-07-13 16:32:33.197422] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.842 [2024-07-13 16:32:33.197641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.842 [2024-07-13 16:32:33.197829] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:01.842 [2024-07-13 16:32:33.197966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.842 [2024-07-13 16:32:33.198135] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:01.842 [2024-07-13 16:32:33.198222] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:01.842 [2024-07-13 16:32:33.198349] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:01.842 [2024-07-13 16:32:33.198834] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:01.842 [2024-07-13 16:32:33.198936] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:01.842 [2024-07-13 16:32:33.199126] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.842 pt2 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.842 16:32:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.098 16:32:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:02.098 "name": "raid_bdev1", 00:15:02.098 "uuid": "78f310fe-e6e4-4682-b60e-8447fa6dea4a", 00:15:02.098 "strip_size_kb": 64, 00:15:02.098 "state": "online", 00:15:02.098 "raid_level": "concat", 00:15:02.098 "superblock": true, 00:15:02.098 "num_base_bdevs": 2, 00:15:02.098 "num_base_bdevs_discovered": 2, 00:15:02.098 "num_base_bdevs_operational": 2, 00:15:02.098 "base_bdevs_list": [ 00:15:02.098 { 00:15:02.098 "name": "pt1", 00:15:02.098 "uuid": "f05e13bf-332f-5cfe-87e6-55ce4ee25918", 00:15:02.098 "is_configured": true, 00:15:02.098 "data_offset": 2048, 00:15:02.098 "data_size": 63488 00:15:02.098 }, 00:15:02.098 { 00:15:02.098 "name": "pt2", 00:15:02.098 "uuid": "d284d655-6101-54cb-9275-34ac546a841c", 00:15:02.098 "is_configured": true, 00:15:02.098 "data_offset": 2048, 00:15:02.098 "data_size": 63488 00:15:02.098 } 00:15:02.098 ] 00:15:02.098 }' 00:15:02.098 16:32:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:02.098 16:32:33 -- common/autotest_common.sh@10 -- # set +x 00:15:02.664 16:32:34 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:02.664 16:32:34 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:02.922 [2024-07-13 16:32:34.264885] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.922 16:32:34 -- bdev/bdev_raid.sh@430 -- # '[' 78f310fe-e6e4-4682-b60e-8447fa6dea4a '!=' 78f310fe-e6e4-4682-b60e-8447fa6dea4a ']' 00:15:02.922 16:32:34 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:02.922 16:32:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:02.922 16:32:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:02.922 16:32:34 -- bdev/bdev_raid.sh@511 -- # killprocess 124356 00:15:02.922 16:32:34 -- common/autotest_common.sh@926 -- # '[' -z 124356 ']' 00:15:02.922 16:32:34 -- common/autotest_common.sh@930 -- # kill -0 124356 00:15:02.922 16:32:34 -- common/autotest_common.sh@931 -- # uname 00:15:02.922 16:32:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.922 16:32:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124356 00:15:02.922 16:32:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:02.922 16:32:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:02.922 16:32:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124356' 00:15:02.922 killing process with pid 124356 00:15:02.922 16:32:34 -- common/autotest_common.sh@945 -- # kill 124356 00:15:02.922 [2024-07-13 16:32:34.320450] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.922 16:32:34 -- common/autotest_common.sh@950 -- # wait 124356 00:15:02.922 [2024-07-13 16:32:34.320599] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.922 [2024-07-13 16:32:34.320656] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.922 [2024-07-13 16:32:34.320665] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:02.922 [2024-07-13 16:32:34.361072] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:03.488 00:15:03.488 real 0m7.328s 
00:15:03.488 user 0m12.549s 00:15:03.488 sys 0m1.309s 00:15:03.488 16:32:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.488 16:32:34 -- common/autotest_common.sh@10 -- # set +x 00:15:03.488 ************************************ 00:15:03.488 END TEST raid_superblock_test 00:15:03.488 ************************************ 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:03.488 16:32:34 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:03.488 16:32:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:03.488 16:32:34 -- common/autotest_common.sh@10 -- # set +x 00:15:03.488 ************************************ 00:15:03.488 START TEST raid_state_function_test 00:15:03.488 ************************************ 00:15:03.488 16:32:34 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=124592 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124592' 00:15:03.488 Process raid pid: 124592 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:03.488 16:32:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124592 /var/tmp/spdk-raid.sock 00:15:03.488 16:32:34 -- common/autotest_common.sh@819 -- # '[' -z 124592 ']' 00:15:03.488 16:32:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:03.488 16:32:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:03.488 16:32:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:03.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:03.488 16:32:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:03.488 16:32:34 -- common/autotest_common.sh@10 -- # set +x 00:15:03.488 [2024-07-13 16:32:34.906388] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:03.488 [2024-07-13 16:32:34.906941] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.744 [2024-07-13 16:32:35.062841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.744 [2024-07-13 16:32:35.141279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.002 [2024-07-13 16:32:35.219093] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.567 16:32:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:04.567 16:32:35 -- common/autotest_common.sh@852 -- # return 0 00:15:04.567 16:32:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:04.824 [2024-07-13 16:32:36.142992] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.825 [2024-07-13 16:32:36.143293] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.825 [2024-07-13 16:32:36.143406] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.825 [2024-07-13 16:32:36.143463] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.825 16:32:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.082 16:32:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.082 "name": "Existed_Raid", 00:15:05.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.082 "strip_size_kb": 0, 00:15:05.082 "state": "configuring", 00:15:05.082 "raid_level": "raid1", 00:15:05.082 "superblock": false, 00:15:05.082 "num_base_bdevs": 2, 00:15:05.082 "num_base_bdevs_discovered": 0, 00:15:05.082 "num_base_bdevs_operational": 2, 00:15:05.082 "base_bdevs_list": [ 00:15:05.082 { 00:15:05.082 "name": "BaseBdev1", 00:15:05.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.082 "is_configured": false, 00:15:05.082 "data_offset": 0, 00:15:05.082 "data_size": 0 
00:15:05.082 }, 00:15:05.082 { 00:15:05.082 "name": "BaseBdev2", 00:15:05.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.082 "is_configured": false, 00:15:05.082 "data_offset": 0, 00:15:05.082 "data_size": 0 00:15:05.082 } 00:15:05.082 ] 00:15:05.082 }' 00:15:05.082 16:32:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.082 16:32:36 -- common/autotest_common.sh@10 -- # set +x 00:15:05.646 16:32:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:05.646 [2024-07-13 16:32:37.074992] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.646 [2024-07-13 16:32:37.075230] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:05.646 16:32:37 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:05.902 [2024-07-13 16:32:37.331086] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.902 [2024-07-13 16:32:37.331399] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.902 [2024-07-13 16:32:37.331493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.902 [2024-07-13 16:32:37.331552] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.902 16:32:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:06.161 [2024-07-13 16:32:37.602980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.161 BaseBdev1 00:15:06.161 16:32:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:06.161 16:32:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:06.161 16:32:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:06.161 16:32:37 -- common/autotest_common.sh@889 -- # local i 00:15:06.161 16:32:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:06.161 16:32:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:06.161 16:32:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:06.728 16:32:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:06.728 [ 00:15:06.728 { 00:15:06.728 "name": "BaseBdev1", 00:15:06.728 "aliases": [ 00:15:06.728 "fa9d4f8f-9f78-4ac7-935f-7b54c515511e" 00:15:06.728 ], 00:15:06.728 "product_name": "Malloc disk", 00:15:06.728 "block_size": 512, 00:15:06.728 "num_blocks": 65536, 00:15:06.728 "uuid": "fa9d4f8f-9f78-4ac7-935f-7b54c515511e", 00:15:06.728 "assigned_rate_limits": { 00:15:06.728 "rw_ios_per_sec": 0, 00:15:06.728 "rw_mbytes_per_sec": 0, 00:15:06.728 "r_mbytes_per_sec": 0, 00:15:06.728 "w_mbytes_per_sec": 0 00:15:06.728 }, 00:15:06.728 "claimed": true, 00:15:06.728 "claim_type": "exclusive_write", 00:15:06.728 "zoned": false, 00:15:06.728 "supported_io_types": { 00:15:06.728 "read": true, 00:15:06.728 "write": true, 00:15:06.728 "unmap": true, 00:15:06.728 "write_zeroes": true, 00:15:06.728 "flush": true, 00:15:06.728 "reset": true, 00:15:06.728 "compare": false, 00:15:06.728 "compare_and_write": false, 
00:15:06.728 "abort": true, 00:15:06.728 "nvme_admin": false, 00:15:06.728 "nvme_io": false 00:15:06.728 }, 00:15:06.728 "memory_domains": [ 00:15:06.728 { 00:15:06.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.728 "dma_device_type": 2 00:15:06.728 } 00:15:06.728 ], 00:15:06.728 "driver_specific": {} 00:15:06.728 } 00:15:06.728 ] 00:15:06.728 16:32:38 -- common/autotest_common.sh@895 -- # return 0 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.728 16:32:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.988 16:32:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.988 "name": "Existed_Raid", 00:15:06.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.988 "strip_size_kb": 0, 00:15:06.988 "state": "configuring", 00:15:06.988 "raid_level": "raid1", 00:15:06.988 "superblock": false, 00:15:06.988 "num_base_bdevs": 2, 00:15:06.988 "num_base_bdevs_discovered": 1, 00:15:06.988 "num_base_bdevs_operational": 2, 00:15:06.988 "base_bdevs_list": [ 00:15:06.988 { 00:15:06.988 "name": "BaseBdev1", 00:15:06.988 "uuid": "fa9d4f8f-9f78-4ac7-935f-7b54c515511e", 00:15:06.988 "is_configured": true, 00:15:06.988 "data_offset": 0, 00:15:06.988 "data_size": 65536 00:15:06.988 }, 00:15:06.988 { 00:15:06.988 "name": "BaseBdev2", 00:15:06.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.988 "is_configured": false, 00:15:06.988 "data_offset": 0, 00:15:06.988 "data_size": 0 00:15:06.988 } 00:15:06.988 ] 00:15:06.988 }' 00:15:06.988 16:32:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.988 16:32:38 -- common/autotest_common.sh@10 -- # set +x 00:15:07.555 16:32:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:07.813 [2024-07-13 16:32:39.103321] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.813 [2024-07-13 16:32:39.103624] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:07.813 16:32:39 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:07.813 16:32:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:08.071 [2024-07-13 16:32:39.347453] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.071 [2024-07-13 16:32:39.350207] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.071 [2024-07-13 16:32:39.350406] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.071 16:32:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.329 16:32:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.329 "name": "Existed_Raid", 00:15:08.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.329 "strip_size_kb": 0, 00:15:08.329 "state": "configuring", 00:15:08.329 "raid_level": "raid1", 00:15:08.329 "superblock": false, 00:15:08.329 "num_base_bdevs": 2, 00:15:08.329 "num_base_bdevs_discovered": 1, 00:15:08.329 "num_base_bdevs_operational": 2, 00:15:08.329 "base_bdevs_list": [ 00:15:08.329 { 00:15:08.329 "name": "BaseBdev1", 00:15:08.329 "uuid": "fa9d4f8f-9f78-4ac7-935f-7b54c515511e", 00:15:08.329 "is_configured": true, 00:15:08.329 "data_offset": 0, 00:15:08.329 "data_size": 65536 00:15:08.329 }, 00:15:08.329 { 00:15:08.329 "name": "BaseBdev2", 00:15:08.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.329 "is_configured": false, 00:15:08.329 "data_offset": 0, 00:15:08.329 "data_size": 0 00:15:08.329 } 00:15:08.329 ] 00:15:08.329 }' 00:15:08.329 16:32:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.329 16:32:39 -- common/autotest_common.sh@10 -- # set +x 00:15:08.895 16:32:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:09.153 [2024-07-13 16:32:40.390049] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.153 [2024-07-13 16:32:40.390430] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:09.153 [2024-07-13 16:32:40.390495] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:09.153 [2024-07-13 16:32:40.390835] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:09.153 [2024-07-13 16:32:40.391604] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:09.153 [2024-07-13 16:32:40.391765] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:09.153 [2024-07-13 16:32:40.392297] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.153 BaseBdev2 00:15:09.153 16:32:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:09.153 16:32:40 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:09.153 16:32:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:09.153 16:32:40 -- common/autotest_common.sh@889 -- # local i 00:15:09.153 16:32:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:09.153 16:32:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:09.153 16:32:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.411 16:32:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:09.411 [ 00:15:09.411 { 00:15:09.411 "name": "BaseBdev2", 00:15:09.411 "aliases": [ 00:15:09.411 "84a9f500-4117-4775-9388-16a54564294f" 00:15:09.411 ], 00:15:09.411 "product_name": "Malloc disk", 00:15:09.411 "block_size": 512, 00:15:09.411 "num_blocks": 65536, 00:15:09.411 "uuid": "84a9f500-4117-4775-9388-16a54564294f", 00:15:09.411 "assigned_rate_limits": { 00:15:09.411 "rw_ios_per_sec": 0, 00:15:09.411 "rw_mbytes_per_sec": 0, 00:15:09.411 "r_mbytes_per_sec": 0, 00:15:09.411 "w_mbytes_per_sec": 0 00:15:09.411 }, 00:15:09.411 "claimed": true, 00:15:09.411 "claim_type": "exclusive_write", 00:15:09.411 "zoned": false, 00:15:09.411 "supported_io_types": { 00:15:09.411 "read": true, 00:15:09.411 "write": true, 00:15:09.411 "unmap": true, 00:15:09.411 "write_zeroes": true, 00:15:09.411 "flush": true, 00:15:09.411 "reset": true, 00:15:09.411 "compare": false, 00:15:09.411 "compare_and_write": false, 00:15:09.411 "abort": true, 00:15:09.411 "nvme_admin": false, 00:15:09.411 "nvme_io": false 00:15:09.411 }, 00:15:09.411 "memory_domains": [ 00:15:09.411 { 00:15:09.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.411 "dma_device_type": 2 00:15:09.411 } 00:15:09.411 ], 00:15:09.411 "driver_specific": {} 00:15:09.411 } 00:15:09.411 ] 00:15:09.411 16:32:40 -- common/autotest_common.sh@895 -- # return 0 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.411 16:32:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.671 16:32:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.671 "name": "Existed_Raid", 00:15:09.671 "uuid": "a6350752-db36-4d22-bc8e-868528782ea3", 00:15:09.671 "strip_size_kb": 0, 00:15:09.671 "state": "online", 00:15:09.671 "raid_level": "raid1", 00:15:09.671 "superblock": false, 00:15:09.671 "num_base_bdevs": 2, 00:15:09.671 
"num_base_bdevs_discovered": 2, 00:15:09.671 "num_base_bdevs_operational": 2, 00:15:09.671 "base_bdevs_list": [ 00:15:09.671 { 00:15:09.671 "name": "BaseBdev1", 00:15:09.671 "uuid": "fa9d4f8f-9f78-4ac7-935f-7b54c515511e", 00:15:09.671 "is_configured": true, 00:15:09.671 "data_offset": 0, 00:15:09.671 "data_size": 65536 00:15:09.671 }, 00:15:09.671 { 00:15:09.671 "name": "BaseBdev2", 00:15:09.671 "uuid": "84a9f500-4117-4775-9388-16a54564294f", 00:15:09.671 "is_configured": true, 00:15:09.671 "data_offset": 0, 00:15:09.671 "data_size": 65536 00:15:09.671 } 00:15:09.671 ] 00:15:09.671 }' 00:15:09.671 16:32:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.671 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:15:10.238 16:32:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:10.497 [2024-07-13 16:32:41.842525] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.497 16:32:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.755 16:32:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:10.755 "name": "Existed_Raid", 00:15:10.755 "uuid": "a6350752-db36-4d22-bc8e-868528782ea3", 00:15:10.755 "strip_size_kb": 0, 00:15:10.755 "state": "online", 00:15:10.755 "raid_level": "raid1", 00:15:10.755 "superblock": false, 00:15:10.755 "num_base_bdevs": 2, 00:15:10.755 "num_base_bdevs_discovered": 1, 00:15:10.755 "num_base_bdevs_operational": 1, 00:15:10.755 "base_bdevs_list": [ 00:15:10.755 { 00:15:10.755 "name": null, 00:15:10.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.755 "is_configured": false, 00:15:10.755 "data_offset": 0, 00:15:10.755 "data_size": 65536 00:15:10.755 }, 00:15:10.755 { 00:15:10.755 "name": "BaseBdev2", 00:15:10.755 "uuid": "84a9f500-4117-4775-9388-16a54564294f", 00:15:10.755 "is_configured": true, 00:15:10.755 "data_offset": 0, 00:15:10.755 "data_size": 65536 00:15:10.755 } 00:15:10.755 ] 00:15:10.755 }' 00:15:10.755 16:32:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:10.755 16:32:42 -- common/autotest_common.sh@10 -- # set +x 00:15:11.323 16:32:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:11.323 16:32:42 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:15:11.323 16:32:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.323 16:32:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:11.582 16:32:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:11.582 16:32:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.582 16:32:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:11.840 [2024-07-13 16:32:43.127589] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:11.840 [2024-07-13 16:32:43.127821] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.840 [2024-07-13 16:32:43.127993] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.840 [2024-07-13 16:32:43.148594] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.840 [2024-07-13 16:32:43.148830] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:11.840 16:32:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:11.840 16:32:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:11.840 16:32:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:11.840 16:32:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.097 16:32:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:12.097 16:32:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:12.097 16:32:43 -- bdev/bdev_raid.sh@287 -- # killprocess 124592 00:15:12.097 16:32:43 -- common/autotest_common.sh@926 -- # '[' -z 124592 ']' 00:15:12.097 16:32:43 -- common/autotest_common.sh@930 -- # kill -0 124592 00:15:12.097 16:32:43 -- common/autotest_common.sh@931 -- # uname 00:15:12.097 16:32:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:12.097 16:32:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124592 00:15:12.097 killing process with pid 124592 00:15:12.097 16:32:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:12.097 16:32:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:12.097 16:32:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124592' 00:15:12.097 16:32:43 -- common/autotest_common.sh@945 -- # kill 124592 00:15:12.097 16:32:43 -- common/autotest_common.sh@950 -- # wait 124592 00:15:12.097 [2024-07-13 16:32:43.443357] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.097 [2024-07-13 16:32:43.443463] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.662 ************************************ 00:15:12.662 END TEST raid_state_function_test 00:15:12.662 ************************************ 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:12.662 00:15:12.662 real 0m9.011s 00:15:12.662 user 0m15.618s 00:15:12.662 sys 0m1.688s 00:15:12.662 16:32:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.662 16:32:43 -- common/autotest_common.sh@10 -- # set +x 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:12.662 16:32:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:12.662 16:32:43 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:12.662 16:32:43 -- common/autotest_common.sh@10 -- # set +x 00:15:12.662 ************************************ 00:15:12.662 START TEST raid_state_function_test_sb 00:15:12.662 ************************************ 00:15:12.662 16:32:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=124901 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124901' 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:12.662 Process raid pid: 124901 00:15:12.662 16:32:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124901 /var/tmp/spdk-raid.sock 00:15:12.662 16:32:43 -- common/autotest_common.sh@819 -- # '[' -z 124901 ']' 00:15:12.662 16:32:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:12.662 16:32:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.662 16:32:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:12.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:12.662 16:32:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.662 16:32:43 -- common/autotest_common.sh@10 -- # set +x 00:15:12.662 [2024-07-13 16:32:43.976479] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:12.662 [2024-07-13 16:32:43.976866] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.662 [2024-07-13 16:32:44.121525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.920 [2024-07-13 16:32:44.202029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.920 [2024-07-13 16:32:44.280463] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.486 16:32:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:13.486 16:32:44 -- common/autotest_common.sh@852 -- # return 0 00:15:13.486 16:32:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:13.745 [2024-07-13 16:32:45.185537] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.745 [2024-07-13 16:32:45.185869] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.745 [2024-07-13 16:32:45.185982] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.745 [2024-07-13 16:32:45.186038] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.745 16:32:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.004 16:32:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.004 "name": "Existed_Raid", 00:15:14.004 "uuid": "9eb0a181-5577-43ea-ae02-dc4505d5896a", 00:15:14.004 "strip_size_kb": 0, 00:15:14.004 "state": "configuring", 00:15:14.004 "raid_level": "raid1", 00:15:14.004 "superblock": true, 00:15:14.004 "num_base_bdevs": 2, 00:15:14.004 "num_base_bdevs_discovered": 0, 00:15:14.004 "num_base_bdevs_operational": 2, 00:15:14.004 "base_bdevs_list": [ 00:15:14.004 { 00:15:14.004 "name": "BaseBdev1", 00:15:14.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.004 "is_configured": false, 00:15:14.004 "data_offset": 0, 00:15:14.004 "data_size": 0 00:15:14.004 }, 00:15:14.004 { 00:15:14.004 "name": "BaseBdev2", 00:15:14.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.004 "is_configured": false, 00:15:14.004 "data_offset": 0, 00:15:14.004 "data_size": 0 00:15:14.004 } 00:15:14.004 ] 00:15:14.004 }' 00:15:14.004 16:32:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.004 16:32:45 -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.960 16:32:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:14.960 [2024-07-13 16:32:46.345574] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.960 [2024-07-13 16:32:46.345850] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:14.960 16:32:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:15.219 [2024-07-13 16:32:46.601708] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.219 [2024-07-13 16:32:46.602055] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.219 [2024-07-13 16:32:46.602169] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.219 [2024-07-13 16:32:46.602233] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.219 16:32:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:15.479 [2024-07-13 16:32:46.818009] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.479 BaseBdev1 00:15:15.479 16:32:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:15.479 16:32:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:15.479 16:32:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:15.479 16:32:46 -- common/autotest_common.sh@889 -- # local i 00:15:15.479 16:32:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:15.479 16:32:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:15.479 16:32:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.737 16:32:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.996 [ 00:15:15.996 { 00:15:15.996 "name": "BaseBdev1", 00:15:15.996 "aliases": [ 00:15:15.996 "de846ea1-5d05-458e-90ef-d93ebb5ac4ac" 00:15:15.996 ], 00:15:15.996 "product_name": "Malloc disk", 00:15:15.996 "block_size": 512, 00:15:15.996 "num_blocks": 65536, 00:15:15.996 "uuid": "de846ea1-5d05-458e-90ef-d93ebb5ac4ac", 00:15:15.996 "assigned_rate_limits": { 00:15:15.996 "rw_ios_per_sec": 0, 00:15:15.996 "rw_mbytes_per_sec": 0, 00:15:15.996 "r_mbytes_per_sec": 0, 00:15:15.996 "w_mbytes_per_sec": 0 00:15:15.996 }, 00:15:15.996 "claimed": true, 00:15:15.996 "claim_type": "exclusive_write", 00:15:15.996 "zoned": false, 00:15:15.996 "supported_io_types": { 00:15:15.996 "read": true, 00:15:15.996 "write": true, 00:15:15.996 "unmap": true, 00:15:15.996 "write_zeroes": true, 00:15:15.996 "flush": true, 00:15:15.996 "reset": true, 00:15:15.996 "compare": false, 00:15:15.996 "compare_and_write": false, 00:15:15.996 "abort": true, 00:15:15.996 "nvme_admin": false, 00:15:15.996 "nvme_io": false 00:15:15.996 }, 00:15:15.996 "memory_domains": [ 00:15:15.996 { 00:15:15.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.996 "dma_device_type": 2 00:15:15.996 } 00:15:15.996 ], 00:15:15.996 "driver_specific": {} 00:15:15.996 } 00:15:15.996 ] 00:15:15.996 16:32:47 -- 
common/autotest_common.sh@895 -- # return 0 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.996 16:32:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.255 16:32:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.255 "name": "Existed_Raid", 00:15:16.255 "uuid": "41b1203f-71fd-47bb-a64d-6096d440718b", 00:15:16.255 "strip_size_kb": 0, 00:15:16.255 "state": "configuring", 00:15:16.255 "raid_level": "raid1", 00:15:16.255 "superblock": true, 00:15:16.255 "num_base_bdevs": 2, 00:15:16.255 "num_base_bdevs_discovered": 1, 00:15:16.255 "num_base_bdevs_operational": 2, 00:15:16.255 "base_bdevs_list": [ 00:15:16.255 { 00:15:16.255 "name": "BaseBdev1", 00:15:16.255 "uuid": "de846ea1-5d05-458e-90ef-d93ebb5ac4ac", 00:15:16.255 "is_configured": true, 00:15:16.255 "data_offset": 2048, 00:15:16.255 "data_size": 63488 00:15:16.255 }, 00:15:16.255 { 00:15:16.255 "name": "BaseBdev2", 00:15:16.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.255 "is_configured": false, 00:15:16.255 "data_offset": 0, 00:15:16.255 "data_size": 0 00:15:16.255 } 00:15:16.255 ] 00:15:16.255 }' 00:15:16.255 16:32:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.255 16:32:47 -- common/autotest_common.sh@10 -- # set +x 00:15:16.822 16:32:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:16.822 [2024-07-13 16:32:48.242344] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.822 [2024-07-13 16:32:48.242692] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:16.822 16:32:48 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:16.822 16:32:48 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:17.081 16:32:48 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:17.340 BaseBdev1 00:15:17.340 16:32:48 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:17.340 16:32:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:17.340 16:32:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:17.340 16:32:48 -- common/autotest_common.sh@889 -- # local i 00:15:17.340 16:32:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:17.340 16:32:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:17.340 16:32:48 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.599 16:32:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:17.599 [ 00:15:17.599 { 00:15:17.599 "name": "BaseBdev1", 00:15:17.599 "aliases": [ 00:15:17.599 "8a7df52b-2dae-4da7-a080-1f41d04c3962" 00:15:17.599 ], 00:15:17.599 "product_name": "Malloc disk", 00:15:17.599 "block_size": 512, 00:15:17.599 "num_blocks": 65536, 00:15:17.599 "uuid": "8a7df52b-2dae-4da7-a080-1f41d04c3962", 00:15:17.599 "assigned_rate_limits": { 00:15:17.599 "rw_ios_per_sec": 0, 00:15:17.599 "rw_mbytes_per_sec": 0, 00:15:17.599 "r_mbytes_per_sec": 0, 00:15:17.599 "w_mbytes_per_sec": 0 00:15:17.599 }, 00:15:17.599 "claimed": false, 00:15:17.599 "zoned": false, 00:15:17.599 "supported_io_types": { 00:15:17.599 "read": true, 00:15:17.599 "write": true, 00:15:17.599 "unmap": true, 00:15:17.599 "write_zeroes": true, 00:15:17.599 "flush": true, 00:15:17.599 "reset": true, 00:15:17.599 "compare": false, 00:15:17.599 "compare_and_write": false, 00:15:17.599 "abort": true, 00:15:17.599 "nvme_admin": false, 00:15:17.599 "nvme_io": false 00:15:17.599 }, 00:15:17.599 "memory_domains": [ 00:15:17.599 { 00:15:17.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.600 "dma_device_type": 2 00:15:17.600 } 00:15:17.600 ], 00:15:17.600 "driver_specific": {} 00:15:17.600 } 00:15:17.600 ] 00:15:17.600 16:32:49 -- common/autotest_common.sh@895 -- # return 0 00:15:17.600 16:32:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:17.858 [2024-07-13 16:32:49.227397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.858 [2024-07-13 16:32:49.230114] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.858 [2024-07-13 16:32:49.230310] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:17.858 16:32:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.859 16:32:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.117 16:32:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.117 "name": "Existed_Raid", 00:15:18.117 "uuid": "5f0ff54f-f37f-4963-8eec-2debd77a6443", 00:15:18.117 "strip_size_kb": 0, 00:15:18.117 "state": "configuring", 
00:15:18.117 "raid_level": "raid1", 00:15:18.117 "superblock": true, 00:15:18.117 "num_base_bdevs": 2, 00:15:18.117 "num_base_bdevs_discovered": 1, 00:15:18.117 "num_base_bdevs_operational": 2, 00:15:18.117 "base_bdevs_list": [ 00:15:18.117 { 00:15:18.117 "name": "BaseBdev1", 00:15:18.117 "uuid": "8a7df52b-2dae-4da7-a080-1f41d04c3962", 00:15:18.117 "is_configured": true, 00:15:18.117 "data_offset": 2048, 00:15:18.117 "data_size": 63488 00:15:18.117 }, 00:15:18.117 { 00:15:18.117 "name": "BaseBdev2", 00:15:18.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.117 "is_configured": false, 00:15:18.117 "data_offset": 0, 00:15:18.117 "data_size": 0 00:15:18.117 } 00:15:18.117 ] 00:15:18.117 }' 00:15:18.117 16:32:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.117 16:32:49 -- common/autotest_common.sh@10 -- # set +x 00:15:18.686 16:32:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:18.945 [2024-07-13 16:32:50.365920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.945 [2024-07-13 16:32:50.366485] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:15:18.945 [2024-07-13 16:32:50.366642] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:18.945 [2024-07-13 16:32:50.366878] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:15:18.945 [2024-07-13 16:32:50.367550] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:15:18.945 [2024-07-13 16:32:50.367691] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:15:18.945 [2024-07-13 16:32:50.367996] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.945 BaseBdev2 00:15:18.945 16:32:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:18.945 16:32:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:18.946 16:32:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:18.946 16:32:50 -- common/autotest_common.sh@889 -- # local i 00:15:18.946 16:32:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:18.946 16:32:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:18.946 16:32:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.204 16:32:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:19.464 [ 00:15:19.464 { 00:15:19.464 "name": "BaseBdev2", 00:15:19.464 "aliases": [ 00:15:19.464 "643eb856-1433-4a95-9355-6b7814917ee9" 00:15:19.464 ], 00:15:19.464 "product_name": "Malloc disk", 00:15:19.464 "block_size": 512, 00:15:19.464 "num_blocks": 65536, 00:15:19.464 "uuid": "643eb856-1433-4a95-9355-6b7814917ee9", 00:15:19.464 "assigned_rate_limits": { 00:15:19.464 "rw_ios_per_sec": 0, 00:15:19.464 "rw_mbytes_per_sec": 0, 00:15:19.464 "r_mbytes_per_sec": 0, 00:15:19.464 "w_mbytes_per_sec": 0 00:15:19.464 }, 00:15:19.464 "claimed": true, 00:15:19.464 "claim_type": "exclusive_write", 00:15:19.464 "zoned": false, 00:15:19.464 "supported_io_types": { 00:15:19.464 "read": true, 00:15:19.464 "write": true, 00:15:19.464 "unmap": true, 00:15:19.464 "write_zeroes": true, 00:15:19.464 "flush": true, 00:15:19.464 "reset": true, 
00:15:19.464 "compare": false, 00:15:19.464 "compare_and_write": false, 00:15:19.464 "abort": true, 00:15:19.464 "nvme_admin": false, 00:15:19.464 "nvme_io": false 00:15:19.464 }, 00:15:19.464 "memory_domains": [ 00:15:19.464 { 00:15:19.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.464 "dma_device_type": 2 00:15:19.464 } 00:15:19.464 ], 00:15:19.464 "driver_specific": {} 00:15:19.464 } 00:15:19.464 ] 00:15:19.464 16:32:50 -- common/autotest_common.sh@895 -- # return 0 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.464 16:32:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.723 16:32:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.723 "name": "Existed_Raid", 00:15:19.723 "uuid": "5f0ff54f-f37f-4963-8eec-2debd77a6443", 00:15:19.723 "strip_size_kb": 0, 00:15:19.723 "state": "online", 00:15:19.723 "raid_level": "raid1", 00:15:19.723 "superblock": true, 00:15:19.723 "num_base_bdevs": 2, 00:15:19.723 "num_base_bdevs_discovered": 2, 00:15:19.723 "num_base_bdevs_operational": 2, 00:15:19.723 "base_bdevs_list": [ 00:15:19.723 { 00:15:19.723 "name": "BaseBdev1", 00:15:19.723 "uuid": "8a7df52b-2dae-4da7-a080-1f41d04c3962", 00:15:19.723 "is_configured": true, 00:15:19.723 "data_offset": 2048, 00:15:19.723 "data_size": 63488 00:15:19.723 }, 00:15:19.723 { 00:15:19.723 "name": "BaseBdev2", 00:15:19.723 "uuid": "643eb856-1433-4a95-9355-6b7814917ee9", 00:15:19.723 "is_configured": true, 00:15:19.723 "data_offset": 2048, 00:15:19.723 "data_size": 63488 00:15:19.723 } 00:15:19.723 ] 00:15:19.723 }' 00:15:19.723 16:32:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.723 16:32:51 -- common/autotest_common.sh@10 -- # set +x 00:15:20.291 16:32:51 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:20.551 [2024-07-13 16:32:51.874359] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.551 
16:32:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.551 16:32:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.810 16:32:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.810 "name": "Existed_Raid", 00:15:20.810 "uuid": "5f0ff54f-f37f-4963-8eec-2debd77a6443", 00:15:20.810 "strip_size_kb": 0, 00:15:20.810 "state": "online", 00:15:20.810 "raid_level": "raid1", 00:15:20.810 "superblock": true, 00:15:20.810 "num_base_bdevs": 2, 00:15:20.810 "num_base_bdevs_discovered": 1, 00:15:20.810 "num_base_bdevs_operational": 1, 00:15:20.810 "base_bdevs_list": [ 00:15:20.810 { 00:15:20.810 "name": null, 00:15:20.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.810 "is_configured": false, 00:15:20.810 "data_offset": 2048, 00:15:20.810 "data_size": 63488 00:15:20.810 }, 00:15:20.810 { 00:15:20.810 "name": "BaseBdev2", 00:15:20.810 "uuid": "643eb856-1433-4a95-9355-6b7814917ee9", 00:15:20.810 "is_configured": true, 00:15:20.810 "data_offset": 2048, 00:15:20.810 "data_size": 63488 00:15:20.810 } 00:15:20.810 ] 00:15:20.810 }' 00:15:20.810 16:32:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.810 16:32:52 -- common/autotest_common.sh@10 -- # set +x 00:15:21.379 16:32:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:21.379 16:32:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:21.379 16:32:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.379 16:32:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:21.379 16:32:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:21.379 16:32:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.379 16:32:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:21.636 [2024-07-13 16:32:53.078675] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:21.636 [2024-07-13 16:32:53.078930] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.636 [2024-07-13 16:32:53.079115] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.636 [2024-07-13 16:32:53.100443] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.636 [2024-07-13 16:32:53.100665] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:15:21.894 16:32:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:21.894 16:32:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:21.894 16:32:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:21.894 16:32:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:22.152 16:32:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:22.152 16:32:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:22.152 16:32:53 -- bdev/bdev_raid.sh@287 -- # killprocess 124901 00:15:22.152 16:32:53 -- common/autotest_common.sh@926 -- # '[' -z 124901 ']' 00:15:22.152 16:32:53 -- common/autotest_common.sh@930 -- # kill -0 124901 00:15:22.152 16:32:53 -- common/autotest_common.sh@931 -- # uname 00:15:22.152 16:32:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:22.152 16:32:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124901 00:15:22.152 16:32:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:22.152 16:32:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:22.152 16:32:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124901' 00:15:22.152 killing process with pid 124901 00:15:22.152 16:32:53 -- common/autotest_common.sh@945 -- # kill 124901 00:15:22.152 16:32:53 -- common/autotest_common.sh@950 -- # wait 124901 00:15:22.152 [2024-07-13 16:32:53.403157] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.152 [2024-07-13 16:32:53.403379] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.411 ************************************ 00:15:22.411 END TEST raid_state_function_test_sb 00:15:22.411 ************************************ 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:22.411 00:15:22.411 real 0m9.893s 00:15:22.411 user 0m17.199s 00:15:22.411 sys 0m1.790s 00:15:22.411 16:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.411 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:22.411 16:32:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:22.411 16:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:22.411 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.411 ************************************ 00:15:22.411 START TEST raid_superblock_test 00:15:22.411 ************************************ 00:15:22.411 16:32:53 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:22.411 16:32:53 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:22.670 16:32:53 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:22.670 16:32:53 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:22.670 16:32:53 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:22.670 16:32:53 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:22.670 16:32:53 -- bdev/bdev_raid.sh@357 -- # raid_pid=125226 00:15:22.670 16:32:53 
-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:22.670 16:32:53 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125226 /var/tmp/spdk-raid.sock 00:15:22.670 16:32:53 -- common/autotest_common.sh@819 -- # '[' -z 125226 ']' 00:15:22.670 16:32:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:22.670 16:32:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:22.670 16:32:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:22.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:22.670 16:32:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:22.670 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.670 [2024-07-13 16:32:53.938971] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:22.670 [2024-07-13 16:32:53.939985] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125226 ] 00:15:22.670 [2024-07-13 16:32:54.085120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.929 [2024-07-13 16:32:54.172222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.929 [2024-07-13 16:32:54.251304] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.497 16:32:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:23.497 16:32:54 -- common/autotest_common.sh@852 -- # return 0 00:15:23.497 16:32:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:23.497 16:32:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:23.497 16:32:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:23.497 16:32:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:23.497 16:32:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:23.497 16:32:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:23.497 16:32:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:23.497 16:32:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:23.497 16:32:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:23.757 malloc1 00:15:23.757 16:32:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:24.017 [2024-07-13 16:32:55.228913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:24.017 [2024-07-13 16:32:55.229304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.017 [2024-07-13 16:32:55.229399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:24.017 [2024-07-13 16:32:55.229566] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.017 [2024-07-13 16:32:55.232695] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.017 [2024-07-13 16:32:55.232863] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:24.017 pt1 00:15:24.017 
16:32:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:24.017 16:32:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:24.017 16:32:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:24.017 16:32:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:24.017 16:32:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:24.017 16:32:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:24.017 16:32:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:24.017 16:32:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:24.017 16:32:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:24.017 malloc2 00:15:24.017 16:32:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:24.276 [2024-07-13 16:32:55.670072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:24.276 [2024-07-13 16:32:55.670436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.276 [2024-07-13 16:32:55.670518] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:24.276 [2024-07-13 16:32:55.670653] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.276 [2024-07-13 16:32:55.673464] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.276 [2024-07-13 16:32:55.673637] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:24.276 pt2 00:15:24.276 16:32:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:24.276 16:32:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:24.276 16:32:55 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:24.535 [2024-07-13 16:32:55.854165] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:24.535 [2024-07-13 16:32:55.856888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:24.535 [2024-07-13 16:32:55.857278] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:15:24.535 [2024-07-13 16:32:55.857397] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:24.535 [2024-07-13 16:32:55.857633] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:24.535 [2024-07-13 16:32:55.858264] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:15:24.535 [2024-07-13 16:32:55.858375] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:15:24.535 [2024-07-13 16:32:55.858622] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.535 16:32:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.795 16:32:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.795 "name": "raid_bdev1", 00:15:24.795 "uuid": "ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25", 00:15:24.795 "strip_size_kb": 0, 00:15:24.795 "state": "online", 00:15:24.795 "raid_level": "raid1", 00:15:24.795 "superblock": true, 00:15:24.795 "num_base_bdevs": 2, 00:15:24.795 "num_base_bdevs_discovered": 2, 00:15:24.795 "num_base_bdevs_operational": 2, 00:15:24.795 "base_bdevs_list": [ 00:15:24.795 { 00:15:24.795 "name": "pt1", 00:15:24.795 "uuid": "548064ca-3c07-5a13-8678-3e9f26ad7df0", 00:15:24.795 "is_configured": true, 00:15:24.795 "data_offset": 2048, 00:15:24.795 "data_size": 63488 00:15:24.795 }, 00:15:24.795 { 00:15:24.795 "name": "pt2", 00:15:24.795 "uuid": "d03b4cf9-2cd4-5ce2-8250-47b710e9a325", 00:15:24.795 "is_configured": true, 00:15:24.795 "data_offset": 2048, 00:15:24.795 "data_size": 63488 00:15:24.795 } 00:15:24.795 ] 00:15:24.795 }' 00:15:24.795 16:32:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.795 16:32:56 -- common/autotest_common.sh@10 -- # set +x 00:15:25.364 16:32:56 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:25.364 16:32:56 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:25.623 [2024-07-13 16:32:56.919052] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.623 16:32:56 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25 00:15:25.623 16:32:56 -- bdev/bdev_raid.sh@380 -- # '[' -z ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25 ']' 00:15:25.623 16:32:56 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:25.882 [2024-07-13 16:32:57.186887] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.882 [2024-07-13 16:32:57.187119] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.882 [2024-07-13 16:32:57.187367] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.882 [2024-07-13 16:32:57.187543] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.882 [2024-07-13 16:32:57.187622] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:15:25.882 16:32:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.882 16:32:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:26.141 16:32:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:26.141 16:32:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:26.141 16:32:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.141 16:32:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:26.400 16:32:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:26.400 16:32:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:26.400 16:32:57 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:26.400 16:32:57 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:26.659 16:32:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:26.659 16:32:58 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:26.659 16:32:58 -- common/autotest_common.sh@640 -- # local es=0 00:15:26.659 16:32:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:26.659 16:32:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.659 16:32:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:26.659 16:32:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.659 16:32:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:26.659 16:32:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.659 16:32:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:26.659 16:32:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.659 16:32:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:26.659 16:32:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:26.919 [2024-07-13 16:32:58.223029] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:26.919 [2024-07-13 16:32:58.225705] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:26.919 [2024-07-13 16:32:58.225915] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:26.919 [2024-07-13 16:32:58.226177] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:26.919 [2024-07-13 16:32:58.226287] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.919 [2024-07-13 16:32:58.226361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:15:26.919 request: 00:15:26.919 { 00:15:26.919 "name": "raid_bdev1", 00:15:26.919 "raid_level": "raid1", 00:15:26.919 "base_bdevs": [ 00:15:26.919 "malloc1", 00:15:26.919 "malloc2" 00:15:26.919 ], 00:15:26.919 "superblock": false, 00:15:26.919 "method": "bdev_raid_create", 00:15:26.919 "req_id": 1 00:15:26.919 } 00:15:26.919 Got JSON-RPC error response 00:15:26.919 response: 00:15:26.919 { 00:15:26.919 "code": -17, 00:15:26.919 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:26.919 } 00:15:26.919 16:32:58 -- common/autotest_common.sh@643 -- # es=1 00:15:26.919 16:32:58 -- common/autotest_common.sh@651 -- # 
(( es > 128 )) 00:15:26.919 16:32:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:26.919 16:32:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:26.919 16:32:58 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.919 16:32:58 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:27.179 [2024-07-13 16:32:58.615086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:27.179 [2024-07-13 16:32:58.615409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.179 [2024-07-13 16:32:58.615482] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:27.179 [2024-07-13 16:32:58.615572] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.179 [2024-07-13 16:32:58.618491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.179 [2024-07-13 16:32:58.618643] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:27.179 [2024-07-13 16:32:58.618862] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:27.179 [2024-07-13 16:32:58.618959] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:27.179 pt1 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.179 16:32:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.436 16:32:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:27.436 "name": "raid_bdev1", 00:15:27.436 "uuid": "ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25", 00:15:27.436 "strip_size_kb": 0, 00:15:27.436 "state": "configuring", 00:15:27.436 "raid_level": "raid1", 00:15:27.436 "superblock": true, 00:15:27.436 "num_base_bdevs": 2, 00:15:27.436 "num_base_bdevs_discovered": 1, 00:15:27.436 "num_base_bdevs_operational": 2, 00:15:27.436 "base_bdevs_list": [ 00:15:27.436 { 00:15:27.436 "name": "pt1", 00:15:27.436 "uuid": "548064ca-3c07-5a13-8678-3e9f26ad7df0", 00:15:27.436 "is_configured": true, 00:15:27.436 "data_offset": 2048, 00:15:27.436 "data_size": 63488 00:15:27.436 }, 00:15:27.436 { 00:15:27.436 "name": null, 00:15:27.436 "uuid": "d03b4cf9-2cd4-5ce2-8250-47b710e9a325", 00:15:27.436 
"is_configured": false, 00:15:27.436 "data_offset": 2048, 00:15:27.436 "data_size": 63488 00:15:27.436 } 00:15:27.436 ] 00:15:27.436 }' 00:15:27.436 16:32:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:27.436 16:32:58 -- common/autotest_common.sh@10 -- # set +x 00:15:28.002 16:32:59 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:28.002 16:32:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:28.002 16:32:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:28.002 16:32:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:28.261 [2024-07-13 16:32:59.615586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:28.261 [2024-07-13 16:32:59.615933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.261 [2024-07-13 16:32:59.616007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:28.261 [2024-07-13 16:32:59.616102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.261 [2024-07-13 16:32:59.616631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.261 [2024-07-13 16:32:59.616778] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:28.261 [2024-07-13 16:32:59.616954] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:28.261 [2024-07-13 16:32:59.617059] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.261 [2024-07-13 16:32:59.617301] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:28.261 [2024-07-13 16:32:59.617404] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:28.261 [2024-07-13 16:32:59.617527] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:28.261 [2024-07-13 16:32:59.617973] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:28.261 [2024-07-13 16:32:59.618076] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:28.261 [2024-07-13 16:32:59.618251] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.261 pt2 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.261 16:32:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.261 16:32:59 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.519 16:32:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.519 "name": "raid_bdev1", 00:15:28.519 "uuid": "ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25", 00:15:28.519 "strip_size_kb": 0, 00:15:28.519 "state": "online", 00:15:28.519 "raid_level": "raid1", 00:15:28.519 "superblock": true, 00:15:28.519 "num_base_bdevs": 2, 00:15:28.519 "num_base_bdevs_discovered": 2, 00:15:28.519 "num_base_bdevs_operational": 2, 00:15:28.519 "base_bdevs_list": [ 00:15:28.519 { 00:15:28.519 "name": "pt1", 00:15:28.519 "uuid": "548064ca-3c07-5a13-8678-3e9f26ad7df0", 00:15:28.519 "is_configured": true, 00:15:28.519 "data_offset": 2048, 00:15:28.519 "data_size": 63488 00:15:28.519 }, 00:15:28.519 { 00:15:28.519 "name": "pt2", 00:15:28.519 "uuid": "d03b4cf9-2cd4-5ce2-8250-47b710e9a325", 00:15:28.519 "is_configured": true, 00:15:28.519 "data_offset": 2048, 00:15:28.519 "data_size": 63488 00:15:28.519 } 00:15:28.519 ] 00:15:28.519 }' 00:15:28.519 16:32:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.519 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:15:29.086 16:33:00 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:29.086 16:33:00 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:29.355 [2024-07-13 16:33:00.615950] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.355 16:33:00 -- bdev/bdev_raid.sh@430 -- # '[' ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25 '!=' ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25 ']' 00:15:29.355 16:33:00 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:29.355 16:33:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:29.355 16:33:00 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:29.355 16:33:00 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:29.355 [2024-07-13 16:33:00.799878] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:29.355 16:33:00 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.355 16:33:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:29.355 16:33:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:29.356 16:33:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:29.356 16:33:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:29.356 16:33:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:29.356 16:33:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.356 16:33:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.356 16:33:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.356 16:33:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.356 16:33:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.614 16:33:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.614 16:33:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.614 "name": "raid_bdev1", 00:15:29.614 "uuid": "ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25", 00:15:29.614 "strip_size_kb": 0, 00:15:29.614 "state": "online", 00:15:29.614 "raid_level": "raid1", 00:15:29.614 "superblock": true, 00:15:29.614 "num_base_bdevs": 2, 00:15:29.614 "num_base_bdevs_discovered": 1, 00:15:29.614 "num_base_bdevs_operational": 1, 00:15:29.614 
"base_bdevs_list": [ 00:15:29.614 { 00:15:29.614 "name": null, 00:15:29.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.614 "is_configured": false, 00:15:29.614 "data_offset": 2048, 00:15:29.614 "data_size": 63488 00:15:29.614 }, 00:15:29.614 { 00:15:29.614 "name": "pt2", 00:15:29.614 "uuid": "d03b4cf9-2cd4-5ce2-8250-47b710e9a325", 00:15:29.614 "is_configured": true, 00:15:29.614 "data_offset": 2048, 00:15:29.614 "data_size": 63488 00:15:29.614 } 00:15:29.614 ] 00:15:29.614 }' 00:15:29.614 16:33:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.614 16:33:01 -- common/autotest_common.sh@10 -- # set +x 00:15:30.181 16:33:01 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:30.439 [2024-07-13 16:33:01.844038] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.439 [2024-07-13 16:33:01.844340] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.439 [2024-07-13 16:33:01.844515] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.439 [2024-07-13 16:33:01.844608] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.439 [2024-07-13 16:33:01.844814] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:30.439 16:33:01 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.439 16:33:01 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:30.697 16:33:02 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:30.697 16:33:02 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:30.697 16:33:02 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:30.697 16:33:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:30.697 16:33:02 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:30.956 16:33:02 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:30.956 16:33:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:30.956 16:33:02 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:30.956 16:33:02 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:30.956 16:33:02 -- bdev/bdev_raid.sh@462 -- # i=1 00:15:30.956 16:33:02 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.215 [2024-07-13 16:33:02.556112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.215 [2024-07-13 16:33:02.556500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.215 [2024-07-13 16:33:02.556638] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:31.215 [2024-07-13 16:33:02.556758] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.215 [2024-07-13 16:33:02.559585] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.215 [2024-07-13 16:33:02.559752] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.215 [2024-07-13 16:33:02.559934] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:31.215 [2024-07-13 16:33:02.560075] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.215 [2024-07-13 16:33:02.560220] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:15:31.215 [2024-07-13 16:33:02.560329] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:31.215 [2024-07-13 16:33:02.560432] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:31.215 [2024-07-13 16:33:02.560855] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:15:31.215 [2024-07-13 16:33:02.560959] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:15:31.215 [2024-07-13 16:33:02.561185] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.215 pt2 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.215 16:33:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.474 16:33:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.474 "name": "raid_bdev1", 00:15:31.474 "uuid": "ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25", 00:15:31.474 "strip_size_kb": 0, 00:15:31.474 "state": "online", 00:15:31.474 "raid_level": "raid1", 00:15:31.474 "superblock": true, 00:15:31.474 "num_base_bdevs": 2, 00:15:31.474 "num_base_bdevs_discovered": 1, 00:15:31.474 "num_base_bdevs_operational": 1, 00:15:31.474 "base_bdevs_list": [ 00:15:31.475 { 00:15:31.475 "name": null, 00:15:31.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.475 "is_configured": false, 00:15:31.475 "data_offset": 2048, 00:15:31.475 "data_size": 63488 00:15:31.475 }, 00:15:31.475 { 00:15:31.475 "name": "pt2", 00:15:31.475 "uuid": "d03b4cf9-2cd4-5ce2-8250-47b710e9a325", 00:15:31.475 "is_configured": true, 00:15:31.475 "data_offset": 2048, 00:15:31.475 "data_size": 63488 00:15:31.475 } 00:15:31.475 ] 00:15:31.475 }' 00:15:31.475 16:33:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.475 16:33:02 -- common/autotest_common.sh@10 -- # set +x 00:15:32.043 16:33:03 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:15:32.043 16:33:03 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:32.043 16:33:03 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:32.302 [2024-07-13 16:33:03.536812] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.302 16:33:03 -- bdev/bdev_raid.sh@506 -- # '[' ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25 '!=' ea6994a2-89ee-4b1d-8cd0-c9aaa4f54b25 ']' 00:15:32.302 16:33:03 -- 
bdev/bdev_raid.sh@511 -- # killprocess 125226 00:15:32.302 16:33:03 -- common/autotest_common.sh@926 -- # '[' -z 125226 ']' 00:15:32.302 16:33:03 -- common/autotest_common.sh@930 -- # kill -0 125226 00:15:32.302 16:33:03 -- common/autotest_common.sh@931 -- # uname 00:15:32.302 16:33:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:32.302 16:33:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125226 00:15:32.302 16:33:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:32.302 16:33:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:32.302 16:33:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125226' 00:15:32.302 killing process with pid 125226 00:15:32.302 16:33:03 -- common/autotest_common.sh@945 -- # kill 125226 00:15:32.302 16:33:03 -- common/autotest_common.sh@950 -- # wait 125226 00:15:32.302 [2024-07-13 16:33:03.591964] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.302 [2024-07-13 16:33:03.592074] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.302 [2024-07-13 16:33:03.592189] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.302 [2024-07-13 16:33:03.592354] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:15:32.302 [2024-07-13 16:33:03.636628] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.871 ************************************ 00:15:32.871 END TEST raid_superblock_test 00:15:32.871 ************************************ 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:32.871 00:15:32.871 real 0m10.180s 00:15:32.871 user 0m17.869s 00:15:32.871 sys 0m1.957s 00:15:32.871 16:33:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.871 16:33:04 -- common/autotest_common.sh@10 -- # set +x 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:32.871 16:33:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:32.871 16:33:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:32.871 16:33:04 -- common/autotest_common.sh@10 -- # set +x 00:15:32.871 ************************************ 00:15:32.871 START TEST raid_state_function_test 00:15:32.871 ************************************ 00:15:32.871 16:33:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=125565 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125565' 00:15:32.871 Process raid pid: 125565 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125565 /var/tmp/spdk-raid.sock 00:15:32.871 16:33:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:32.871 16:33:04 -- common/autotest_common.sh@819 -- # '[' -z 125565 ']' 00:15:32.871 16:33:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:32.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:32.871 16:33:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:32.871 16:33:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:32.871 16:33:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:32.871 16:33:04 -- common/autotest_common.sh@10 -- # set +x 00:15:32.871 [2024-07-13 16:33:04.209931] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
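Everything above runs through a dedicated bdev_svc target listening on the UNIX socket /var/tmp/spdk-raid.sock; each step the test takes from here on is a scripts/rpc.py invocation against that socket. As a rough standalone sketch (not the harness itself: the sleep is a crude stand-in for the waitforlisten helper, and the repo path is copied from the log), the raid0 bring-up this test performs reduces to:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the minimal bdev application with a private RPC socket, as the harness does.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    sleep 1   # crude stand-in for waitforlisten

    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

    # Create the three 32 MiB / 512-byte-block malloc bdevs the test assembles
    # (32 MiB at 512 B per block is the 65536 num_blocks reported below).
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        rpc bdev_malloc_create 32 512 -b "$b"
    done

    # Assemble them into a raid0 array with a 64 KiB strip and no superblock.
    rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # The verify_raid_bdev_state checks in this log are jq filters over this output.
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'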
00:15:32.871 [2024-07-13 16:33:04.211242] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.144 [2024-07-13 16:33:04.369418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.144 [2024-07-13 16:33:04.462483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.144 [2024-07-13 16:33:04.542582] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.714 16:33:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:33.714 16:33:05 -- common/autotest_common.sh@852 -- # return 0 00:15:33.714 16:33:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:33.972 [2024-07-13 16:33:05.407123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.972 [2024-07-13 16:33:05.407476] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.972 [2024-07-13 16:33:05.407563] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.972 [2024-07-13 16:33:05.407618] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.972 [2024-07-13 16:33:05.407645] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.972 [2024-07-13 16:33:05.407718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.972 16:33:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:33.972 16:33:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.972 16:33:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:33.972 16:33:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:33.972 16:33:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:33.973 16:33:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:33.973 16:33:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.973 16:33:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.973 16:33:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.973 16:33:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.973 16:33:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.973 16:33:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.230 16:33:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.230 "name": "Existed_Raid", 00:15:34.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.230 "strip_size_kb": 64, 00:15:34.230 "state": "configuring", 00:15:34.230 "raid_level": "raid0", 00:15:34.230 "superblock": false, 00:15:34.230 "num_base_bdevs": 3, 00:15:34.230 "num_base_bdevs_discovered": 0, 00:15:34.230 "num_base_bdevs_operational": 3, 00:15:34.230 "base_bdevs_list": [ 00:15:34.230 { 00:15:34.230 "name": "BaseBdev1", 00:15:34.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.230 "is_configured": false, 00:15:34.230 "data_offset": 0, 00:15:34.230 "data_size": 0 00:15:34.230 }, 00:15:34.230 { 00:15:34.230 "name": "BaseBdev2", 00:15:34.230 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:34.230 "is_configured": false, 00:15:34.230 "data_offset": 0, 00:15:34.230 "data_size": 0 00:15:34.230 }, 00:15:34.230 { 00:15:34.230 "name": "BaseBdev3", 00:15:34.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.230 "is_configured": false, 00:15:34.230 "data_offset": 0, 00:15:34.230 "data_size": 0 00:15:34.230 } 00:15:34.230 ] 00:15:34.230 }' 00:15:34.230 16:33:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.230 16:33:05 -- common/autotest_common.sh@10 -- # set +x 00:15:34.798 16:33:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:35.057 [2024-07-13 16:33:06.427158] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.057 [2024-07-13 16:33:06.427474] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:35.057 16:33:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:35.316 [2024-07-13 16:33:06.691275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.316 [2024-07-13 16:33:06.691655] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.316 [2024-07-13 16:33:06.691741] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.316 [2024-07-13 16:33:06.691802] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.316 [2024-07-13 16:33:06.691828] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.316 [2024-07-13 16:33:06.691886] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.316 16:33:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:35.575 [2024-07-13 16:33:06.967568] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.575 BaseBdev1 00:15:35.575 16:33:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:35.575 16:33:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:35.575 16:33:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:35.575 16:33:06 -- common/autotest_common.sh@889 -- # local i 00:15:35.575 16:33:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:35.575 16:33:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:35.575 16:33:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.833 16:33:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.092 [ 00:15:36.092 { 00:15:36.092 "name": "BaseBdev1", 00:15:36.092 "aliases": [ 00:15:36.092 "bbae646f-791d-4d50-9e01-6a26ec6c404e" 00:15:36.092 ], 00:15:36.092 "product_name": "Malloc disk", 00:15:36.092 "block_size": 512, 00:15:36.092 "num_blocks": 65536, 00:15:36.092 "uuid": "bbae646f-791d-4d50-9e01-6a26ec6c404e", 00:15:36.092 "assigned_rate_limits": { 00:15:36.092 "rw_ios_per_sec": 0, 00:15:36.092 "rw_mbytes_per_sec": 0, 00:15:36.092 "r_mbytes_per_sec": 0, 00:15:36.092 "w_mbytes_per_sec": 0 
00:15:36.092 }, 00:15:36.092 "claimed": true, 00:15:36.092 "claim_type": "exclusive_write", 00:15:36.092 "zoned": false, 00:15:36.092 "supported_io_types": { 00:15:36.092 "read": true, 00:15:36.092 "write": true, 00:15:36.092 "unmap": true, 00:15:36.092 "write_zeroes": true, 00:15:36.092 "flush": true, 00:15:36.092 "reset": true, 00:15:36.092 "compare": false, 00:15:36.092 "compare_and_write": false, 00:15:36.092 "abort": true, 00:15:36.092 "nvme_admin": false, 00:15:36.092 "nvme_io": false 00:15:36.092 }, 00:15:36.092 "memory_domains": [ 00:15:36.092 { 00:15:36.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.092 "dma_device_type": 2 00:15:36.092 } 00:15:36.092 ], 00:15:36.092 "driver_specific": {} 00:15:36.092 } 00:15:36.092 ] 00:15:36.092 16:33:07 -- common/autotest_common.sh@895 -- # return 0 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.092 16:33:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.351 16:33:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.351 "name": "Existed_Raid", 00:15:36.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.351 "strip_size_kb": 64, 00:15:36.351 "state": "configuring", 00:15:36.351 "raid_level": "raid0", 00:15:36.351 "superblock": false, 00:15:36.351 "num_base_bdevs": 3, 00:15:36.351 "num_base_bdevs_discovered": 1, 00:15:36.351 "num_base_bdevs_operational": 3, 00:15:36.351 "base_bdevs_list": [ 00:15:36.351 { 00:15:36.351 "name": "BaseBdev1", 00:15:36.351 "uuid": "bbae646f-791d-4d50-9e01-6a26ec6c404e", 00:15:36.351 "is_configured": true, 00:15:36.351 "data_offset": 0, 00:15:36.351 "data_size": 65536 00:15:36.351 }, 00:15:36.351 { 00:15:36.351 "name": "BaseBdev2", 00:15:36.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.351 "is_configured": false, 00:15:36.351 "data_offset": 0, 00:15:36.351 "data_size": 0 00:15:36.351 }, 00:15:36.351 { 00:15:36.351 "name": "BaseBdev3", 00:15:36.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.351 "is_configured": false, 00:15:36.351 "data_offset": 0, 00:15:36.351 "data_size": 0 00:15:36.351 } 00:15:36.351 ] 00:15:36.351 }' 00:15:36.351 16:33:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.351 16:33:07 -- common/autotest_common.sh@10 -- # set +x 00:15:36.918 16:33:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:37.204 [2024-07-13 16:33:08.495910] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.205 [2024-07-13 16:33:08.496210] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:15:37.205 16:33:08 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:37.205 16:33:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:37.482 [2024-07-13 16:33:08.704089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.482 [2024-07-13 16:33:08.706880] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.482 [2024-07-13 16:33:08.707088] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.482 [2024-07-13 16:33:08.707172] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.482 [2024-07-13 16:33:08.707234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.482 16:33:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.482 "name": "Existed_Raid", 00:15:37.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.482 "strip_size_kb": 64, 00:15:37.482 "state": "configuring", 00:15:37.482 "raid_level": "raid0", 00:15:37.482 "superblock": false, 00:15:37.482 "num_base_bdevs": 3, 00:15:37.482 "num_base_bdevs_discovered": 1, 00:15:37.482 "num_base_bdevs_operational": 3, 00:15:37.482 "base_bdevs_list": [ 00:15:37.482 { 00:15:37.482 "name": "BaseBdev1", 00:15:37.482 "uuid": "bbae646f-791d-4d50-9e01-6a26ec6c404e", 00:15:37.482 "is_configured": true, 00:15:37.482 "data_offset": 0, 00:15:37.482 "data_size": 65536 00:15:37.482 }, 00:15:37.482 { 00:15:37.482 "name": "BaseBdev2", 00:15:37.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.483 "is_configured": false, 00:15:37.483 "data_offset": 0, 00:15:37.483 "data_size": 0 00:15:37.483 }, 00:15:37.483 { 00:15:37.483 "name": "BaseBdev3", 00:15:37.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.483 "is_configured": false, 00:15:37.483 "data_offset": 0, 00:15:37.483 "data_size": 0 00:15:37.483 } 00:15:37.483 ] 00:15:37.483 }' 00:15:37.483 16:33:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.483 16:33:08 -- common/autotest_common.sh@10 -- # set +x 00:15:38.049 16:33:09 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.617 [2024-07-13 16:33:09.782272] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.617 BaseBdev2 00:15:38.617 16:33:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:38.617 16:33:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:38.617 16:33:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:38.617 16:33:09 -- common/autotest_common.sh@889 -- # local i 00:15:38.617 16:33:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:38.617 16:33:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:38.617 16:33:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.617 16:33:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.876 [ 00:15:38.876 { 00:15:38.876 "name": "BaseBdev2", 00:15:38.876 "aliases": [ 00:15:38.876 "a775e9f7-8c47-44cf-b966-35fa57620087" 00:15:38.876 ], 00:15:38.876 "product_name": "Malloc disk", 00:15:38.876 "block_size": 512, 00:15:38.876 "num_blocks": 65536, 00:15:38.876 "uuid": "a775e9f7-8c47-44cf-b966-35fa57620087", 00:15:38.876 "assigned_rate_limits": { 00:15:38.876 "rw_ios_per_sec": 0, 00:15:38.876 "rw_mbytes_per_sec": 0, 00:15:38.876 "r_mbytes_per_sec": 0, 00:15:38.876 "w_mbytes_per_sec": 0 00:15:38.876 }, 00:15:38.876 "claimed": true, 00:15:38.876 "claim_type": "exclusive_write", 00:15:38.876 "zoned": false, 00:15:38.876 "supported_io_types": { 00:15:38.876 "read": true, 00:15:38.876 "write": true, 00:15:38.876 "unmap": true, 00:15:38.876 "write_zeroes": true, 00:15:38.876 "flush": true, 00:15:38.876 "reset": true, 00:15:38.876 "compare": false, 00:15:38.876 "compare_and_write": false, 00:15:38.876 "abort": true, 00:15:38.876 "nvme_admin": false, 00:15:38.876 "nvme_io": false 00:15:38.876 }, 00:15:38.876 "memory_domains": [ 00:15:38.876 { 00:15:38.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.876 "dma_device_type": 2 00:15:38.876 } 00:15:38.876 ], 00:15:38.876 "driver_specific": {} 00:15:38.876 } 00:15:38.876 ] 00:15:38.876 16:33:10 -- common/autotest_common.sh@895 -- # return 0 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.876 16:33:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:39.135 16:33:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.135 "name": "Existed_Raid", 00:15:39.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.135 "strip_size_kb": 64, 00:15:39.135 "state": "configuring", 00:15:39.135 "raid_level": "raid0", 00:15:39.135 "superblock": false, 00:15:39.135 "num_base_bdevs": 3, 00:15:39.135 "num_base_bdevs_discovered": 2, 00:15:39.135 "num_base_bdevs_operational": 3, 00:15:39.135 "base_bdevs_list": [ 00:15:39.135 { 00:15:39.135 "name": "BaseBdev1", 00:15:39.135 "uuid": "bbae646f-791d-4d50-9e01-6a26ec6c404e", 00:15:39.135 "is_configured": true, 00:15:39.135 "data_offset": 0, 00:15:39.135 "data_size": 65536 00:15:39.135 }, 00:15:39.135 { 00:15:39.135 "name": "BaseBdev2", 00:15:39.135 "uuid": "a775e9f7-8c47-44cf-b966-35fa57620087", 00:15:39.135 "is_configured": true, 00:15:39.135 "data_offset": 0, 00:15:39.135 "data_size": 65536 00:15:39.135 }, 00:15:39.135 { 00:15:39.135 "name": "BaseBdev3", 00:15:39.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.135 "is_configured": false, 00:15:39.135 "data_offset": 0, 00:15:39.135 "data_size": 0 00:15:39.135 } 00:15:39.135 ] 00:15:39.135 }' 00:15:39.135 16:33:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.135 16:33:10 -- common/autotest_common.sh@10 -- # set +x 00:15:39.704 16:33:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:39.963 [2024-07-13 16:33:11.342309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.963 [2024-07-13 16:33:11.342623] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:39.963 [2024-07-13 16:33:11.342667] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:39.963 [2024-07-13 16:33:11.342932] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:15:39.963 [2024-07-13 16:33:11.343480] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:39.963 [2024-07-13 16:33:11.343595] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:39.963 [2024-07-13 16:33:11.343951] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.963 BaseBdev3 00:15:39.963 16:33:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:39.963 16:33:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:39.963 16:33:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:39.963 16:33:11 -- common/autotest_common.sh@889 -- # local i 00:15:39.963 16:33:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:39.963 16:33:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:39.963 16:33:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:40.222 16:33:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:40.481 [ 00:15:40.481 { 00:15:40.481 "name": "BaseBdev3", 00:15:40.481 "aliases": [ 00:15:40.481 "f662137b-65da-4a95-a9ab-2455273b9c54" 00:15:40.481 ], 00:15:40.481 "product_name": "Malloc disk", 00:15:40.481 "block_size": 512, 00:15:40.481 "num_blocks": 65536, 00:15:40.481 "uuid": "f662137b-65da-4a95-a9ab-2455273b9c54", 00:15:40.481 "assigned_rate_limits": { 00:15:40.481 
"rw_ios_per_sec": 0, 00:15:40.481 "rw_mbytes_per_sec": 0, 00:15:40.481 "r_mbytes_per_sec": 0, 00:15:40.481 "w_mbytes_per_sec": 0 00:15:40.481 }, 00:15:40.481 "claimed": true, 00:15:40.481 "claim_type": "exclusive_write", 00:15:40.481 "zoned": false, 00:15:40.481 "supported_io_types": { 00:15:40.481 "read": true, 00:15:40.481 "write": true, 00:15:40.481 "unmap": true, 00:15:40.481 "write_zeroes": true, 00:15:40.481 "flush": true, 00:15:40.481 "reset": true, 00:15:40.481 "compare": false, 00:15:40.481 "compare_and_write": false, 00:15:40.481 "abort": true, 00:15:40.481 "nvme_admin": false, 00:15:40.481 "nvme_io": false 00:15:40.481 }, 00:15:40.481 "memory_domains": [ 00:15:40.481 { 00:15:40.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.481 "dma_device_type": 2 00:15:40.481 } 00:15:40.481 ], 00:15:40.481 "driver_specific": {} 00:15:40.481 } 00:15:40.481 ] 00:15:40.481 16:33:11 -- common/autotest_common.sh@895 -- # return 0 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.481 16:33:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.740 16:33:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.740 "name": "Existed_Raid", 00:15:40.740 "uuid": "4b4f0085-2c36-44bc-9b1a-b76f816a5163", 00:15:40.740 "strip_size_kb": 64, 00:15:40.740 "state": "online", 00:15:40.740 "raid_level": "raid0", 00:15:40.740 "superblock": false, 00:15:40.740 "num_base_bdevs": 3, 00:15:40.740 "num_base_bdevs_discovered": 3, 00:15:40.740 "num_base_bdevs_operational": 3, 00:15:40.740 "base_bdevs_list": [ 00:15:40.740 { 00:15:40.740 "name": "BaseBdev1", 00:15:40.740 "uuid": "bbae646f-791d-4d50-9e01-6a26ec6c404e", 00:15:40.740 "is_configured": true, 00:15:40.740 "data_offset": 0, 00:15:40.740 "data_size": 65536 00:15:40.740 }, 00:15:40.740 { 00:15:40.740 "name": "BaseBdev2", 00:15:40.740 "uuid": "a775e9f7-8c47-44cf-b966-35fa57620087", 00:15:40.740 "is_configured": true, 00:15:40.740 "data_offset": 0, 00:15:40.740 "data_size": 65536 00:15:40.740 }, 00:15:40.740 { 00:15:40.740 "name": "BaseBdev3", 00:15:40.740 "uuid": "f662137b-65da-4a95-a9ab-2455273b9c54", 00:15:40.740 "is_configured": true, 00:15:40.740 "data_offset": 0, 00:15:40.740 "data_size": 65536 00:15:40.740 } 00:15:40.740 ] 00:15:40.740 }' 00:15:40.740 16:33:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.740 16:33:12 -- common/autotest_common.sh@10 -- # set +x 00:15:41.308 16:33:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:41.308 [2024-07-13 16:33:12.760981] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.308 [2024-07-13 16:33:12.761265] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.308 [2024-07-13 16:33:12.761493] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.572 16:33:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.573 "name": "Existed_Raid", 00:15:41.573 "uuid": "4b4f0085-2c36-44bc-9b1a-b76f816a5163", 00:15:41.573 "strip_size_kb": 64, 00:15:41.573 "state": "offline", 00:15:41.573 "raid_level": "raid0", 00:15:41.573 "superblock": false, 00:15:41.573 "num_base_bdevs": 3, 00:15:41.573 "num_base_bdevs_discovered": 2, 00:15:41.573 "num_base_bdevs_operational": 2, 00:15:41.573 "base_bdevs_list": [ 00:15:41.573 { 00:15:41.573 "name": null, 00:15:41.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.573 "is_configured": false, 00:15:41.573 "data_offset": 0, 00:15:41.573 "data_size": 65536 00:15:41.573 }, 00:15:41.573 { 00:15:41.573 "name": "BaseBdev2", 00:15:41.573 "uuid": "a775e9f7-8c47-44cf-b966-35fa57620087", 00:15:41.573 "is_configured": true, 00:15:41.573 "data_offset": 0, 00:15:41.573 "data_size": 65536 00:15:41.573 }, 00:15:41.573 { 00:15:41.573 "name": "BaseBdev3", 00:15:41.573 "uuid": "f662137b-65da-4a95-a9ab-2455273b9c54", 00:15:41.573 "is_configured": true, 00:15:41.573 "data_offset": 0, 00:15:41.573 "data_size": 65536 00:15:41.573 } 00:15:41.573 ] 00:15:41.573 }' 00:15:41.573 16:33:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.573 16:33:12 -- common/autotest_common.sh@10 -- # set +x 00:15:42.137 16:33:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:42.137 16:33:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:42.137 16:33:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.137 16:33:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:42.702 16:33:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:42.702 16:33:13 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.702 16:33:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:42.702 [2024-07-13 16:33:14.120885] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.703 16:33:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:42.703 16:33:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:42.703 16:33:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.703 16:33:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:42.960 16:33:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:42.960 16:33:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.960 16:33:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:43.218 [2024-07-13 16:33:14.550968] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.218 [2024-07-13 16:33:14.551276] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:43.218 16:33:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:43.218 16:33:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:43.218 16:33:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.218 16:33:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.476 16:33:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:43.476 16:33:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:43.476 16:33:14 -- bdev/bdev_raid.sh@287 -- # killprocess 125565 00:15:43.476 16:33:14 -- common/autotest_common.sh@926 -- # '[' -z 125565 ']' 00:15:43.476 16:33:14 -- common/autotest_common.sh@930 -- # kill -0 125565 00:15:43.476 16:33:14 -- common/autotest_common.sh@931 -- # uname 00:15:43.476 16:33:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:43.476 16:33:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125565 00:15:43.476 16:33:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:43.476 16:33:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:43.476 16:33:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125565' 00:15:43.476 killing process with pid 125565 00:15:43.476 16:33:14 -- common/autotest_common.sh@945 -- # kill 125565 00:15:43.476 16:33:14 -- common/autotest_common.sh@950 -- # wait 125565 00:15:43.476 [2024-07-13 16:33:14.870017] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.476 [2024-07-13 16:33:14.870124] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.042 ************************************ 00:15:44.042 END TEST raid_state_function_test 00:15:44.042 ************************************ 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:44.042 00:15:44.042 real 0m11.154s 00:15:44.042 user 0m19.564s 00:15:44.042 sys 0m1.996s 00:15:44.042 16:33:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.042 16:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:44.042 16:33:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:44.042 16:33:15 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:44.042 16:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:44.042 ************************************ 00:15:44.042 START TEST raid_state_function_test_sb 00:15:44.042 ************************************ 00:15:44.042 16:33:15 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=125936 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125936' 00:15:44.042 Process raid pid: 125936 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:44.042 16:33:15 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125936 /var/tmp/spdk-raid.sock 00:15:44.042 16:33:15 -- common/autotest_common.sh@819 -- # '[' -z 125936 ']' 00:15:44.042 16:33:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:44.042 16:33:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:44.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:44.042 16:33:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:44.042 16:33:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:44.042 16:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:44.042 [2024-07-13 16:33:15.442079] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
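This _sb variant walks the same state machine with superblock=true, which the harness splices in as superblock_create_arg=-s; the visible difference on the wire is the -s flag on bdev_raid_create, asking the target to write raid superblock metadata onto each base bdev. Side by side, the two create calls as they appear in this log:

    # raid_state_function_test above: superblock=false
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # raid_state_function_test_sb: superblock=true adds -s
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

The raid_superblock_test earlier in this log shows the consequence of that metadata: once base bdevs already carry a raid superblock, a second bdev_raid_create over them is rejected with code -17, "Failed to create RAID bdev raid_bdev1: File exists".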
00:15:44.042 [2024-07-13 16:33:15.442552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.299 [2024-07-13 16:33:15.601787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.299 [2024-07-13 16:33:15.695872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.555 [2024-07-13 16:33:15.778298] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.121 16:33:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:45.121 16:33:16 -- common/autotest_common.sh@852 -- # return 0 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:45.121 [2024-07-13 16:33:16.544216] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.121 [2024-07-13 16:33:16.544573] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.121 [2024-07-13 16:33:16.544667] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.121 [2024-07-13 16:33:16.544722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.121 [2024-07-13 16:33:16.544803] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.121 [2024-07-13 16:33:16.544890] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.121 16:33:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.379 16:33:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.379 "name": "Existed_Raid", 00:15:45.379 "uuid": "48271289-367d-43bc-924c-3f612bded738", 00:15:45.379 "strip_size_kb": 64, 00:15:45.379 "state": "configuring", 00:15:45.379 "raid_level": "raid0", 00:15:45.379 "superblock": true, 00:15:45.379 "num_base_bdevs": 3, 00:15:45.379 "num_base_bdevs_discovered": 0, 00:15:45.379 "num_base_bdevs_operational": 3, 00:15:45.379 "base_bdevs_list": [ 00:15:45.379 { 00:15:45.379 "name": "BaseBdev1", 00:15:45.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.379 "is_configured": false, 00:15:45.379 "data_offset": 0, 00:15:45.379 "data_size": 0 00:15:45.379 }, 00:15:45.379 { 00:15:45.379 "name": "BaseBdev2", 00:15:45.379 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:45.379 "is_configured": false, 00:15:45.379 "data_offset": 0, 00:15:45.379 "data_size": 0 00:15:45.379 }, 00:15:45.379 { 00:15:45.379 "name": "BaseBdev3", 00:15:45.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.379 "is_configured": false, 00:15:45.379 "data_offset": 0, 00:15:45.379 "data_size": 0 00:15:45.379 } 00:15:45.379 ] 00:15:45.379 }' 00:15:45.379 16:33:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.379 16:33:16 -- common/autotest_common.sh@10 -- # set +x 00:15:45.946 16:33:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:46.205 [2024-07-13 16:33:17.656210] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.205 [2024-07-13 16:33:17.656466] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:46.463 16:33:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:46.463 [2024-07-13 16:33:17.928371] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.463 [2024-07-13 16:33:17.928685] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.463 [2024-07-13 16:33:17.928778] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.463 [2024-07-13 16:33:17.928838] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.463 [2024-07-13 16:33:17.928865] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.463 [2024-07-13 16:33:17.928911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.722 16:33:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:46.722 [2024-07-13 16:33:18.140701] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.722 BaseBdev1 00:15:46.722 16:33:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:46.722 16:33:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:46.722 16:33:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:46.722 16:33:18 -- common/autotest_common.sh@889 -- # local i 00:15:46.722 16:33:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:46.722 16:33:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:46.722 16:33:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:46.980 16:33:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.239 [ 00:15:47.239 { 00:15:47.239 "name": "BaseBdev1", 00:15:47.239 "aliases": [ 00:15:47.239 "652b6fa0-329c-4ae5-b4c6-61884578ca18" 00:15:47.239 ], 00:15:47.239 "product_name": "Malloc disk", 00:15:47.239 "block_size": 512, 00:15:47.239 "num_blocks": 65536, 00:15:47.239 "uuid": "652b6fa0-329c-4ae5-b4c6-61884578ca18", 00:15:47.239 "assigned_rate_limits": { 00:15:47.239 "rw_ios_per_sec": 0, 00:15:47.239 "rw_mbytes_per_sec": 0, 00:15:47.239 "r_mbytes_per_sec": 0, 00:15:47.239 
"w_mbytes_per_sec": 0 00:15:47.239 }, 00:15:47.239 "claimed": true, 00:15:47.239 "claim_type": "exclusive_write", 00:15:47.239 "zoned": false, 00:15:47.239 "supported_io_types": { 00:15:47.239 "read": true, 00:15:47.239 "write": true, 00:15:47.239 "unmap": true, 00:15:47.239 "write_zeroes": true, 00:15:47.239 "flush": true, 00:15:47.239 "reset": true, 00:15:47.239 "compare": false, 00:15:47.239 "compare_and_write": false, 00:15:47.239 "abort": true, 00:15:47.239 "nvme_admin": false, 00:15:47.239 "nvme_io": false 00:15:47.239 }, 00:15:47.239 "memory_domains": [ 00:15:47.239 { 00:15:47.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.239 "dma_device_type": 2 00:15:47.239 } 00:15:47.239 ], 00:15:47.239 "driver_specific": {} 00:15:47.239 } 00:15:47.239 ] 00:15:47.239 16:33:18 -- common/autotest_common.sh@895 -- # return 0 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.239 16:33:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.498 16:33:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.498 "name": "Existed_Raid", 00:15:47.498 "uuid": "dfd7d28d-71cd-4e31-9148-e96cb5009ad8", 00:15:47.498 "strip_size_kb": 64, 00:15:47.498 "state": "configuring", 00:15:47.498 "raid_level": "raid0", 00:15:47.498 "superblock": true, 00:15:47.498 "num_base_bdevs": 3, 00:15:47.498 "num_base_bdevs_discovered": 1, 00:15:47.498 "num_base_bdevs_operational": 3, 00:15:47.498 "base_bdevs_list": [ 00:15:47.498 { 00:15:47.498 "name": "BaseBdev1", 00:15:47.498 "uuid": "652b6fa0-329c-4ae5-b4c6-61884578ca18", 00:15:47.498 "is_configured": true, 00:15:47.498 "data_offset": 2048, 00:15:47.498 "data_size": 63488 00:15:47.498 }, 00:15:47.498 { 00:15:47.498 "name": "BaseBdev2", 00:15:47.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.498 "is_configured": false, 00:15:47.498 "data_offset": 0, 00:15:47.498 "data_size": 0 00:15:47.498 }, 00:15:47.498 { 00:15:47.498 "name": "BaseBdev3", 00:15:47.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.498 "is_configured": false, 00:15:47.498 "data_offset": 0, 00:15:47.498 "data_size": 0 00:15:47.498 } 00:15:47.498 ] 00:15:47.498 }' 00:15:47.498 16:33:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.498 16:33:18 -- common/autotest_common.sh@10 -- # set +x 00:15:48.064 16:33:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.064 [2024-07-13 16:33:19.476949] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.064 [2024-07-13 16:33:19.477311] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:48.064 16:33:19 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:48.064 16:33:19 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:48.323 16:33:19 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:48.581 BaseBdev1 00:15:48.581 16:33:19 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:48.581 16:33:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:48.581 16:33:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:48.581 16:33:19 -- common/autotest_common.sh@889 -- # local i 00:15:48.581 16:33:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:48.581 16:33:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:48.581 16:33:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.839 16:33:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.098 [ 00:15:49.098 { 00:15:49.098 "name": "BaseBdev1", 00:15:49.098 "aliases": [ 00:15:49.098 "9425d184-a319-43b6-86c9-ec3e9012f2c0" 00:15:49.098 ], 00:15:49.098 "product_name": "Malloc disk", 00:15:49.098 "block_size": 512, 00:15:49.098 "num_blocks": 65536, 00:15:49.098 "uuid": "9425d184-a319-43b6-86c9-ec3e9012f2c0", 00:15:49.098 "assigned_rate_limits": { 00:15:49.098 "rw_ios_per_sec": 0, 00:15:49.098 "rw_mbytes_per_sec": 0, 00:15:49.098 "r_mbytes_per_sec": 0, 00:15:49.098 "w_mbytes_per_sec": 0 00:15:49.098 }, 00:15:49.098 "claimed": false, 00:15:49.098 "zoned": false, 00:15:49.098 "supported_io_types": { 00:15:49.098 "read": true, 00:15:49.098 "write": true, 00:15:49.098 "unmap": true, 00:15:49.098 "write_zeroes": true, 00:15:49.098 "flush": true, 00:15:49.098 "reset": true, 00:15:49.098 "compare": false, 00:15:49.098 "compare_and_write": false, 00:15:49.098 "abort": true, 00:15:49.098 "nvme_admin": false, 00:15:49.098 "nvme_io": false 00:15:49.098 }, 00:15:49.098 "memory_domains": [ 00:15:49.098 { 00:15:49.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.098 "dma_device_type": 2 00:15:49.098 } 00:15:49.098 ], 00:15:49.098 "driver_specific": {} 00:15:49.098 } 00:15:49.098 ] 00:15:49.098 16:33:20 -- common/autotest_common.sh@895 -- # return 0 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:49.098 [2024-07-13 16:33:20.526921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.098 [2024-07-13 16:33:20.529680] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.098 [2024-07-13 16:33:20.529877] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.098 [2024-07-13 16:33:20.529954] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.098 [2024-07-13 16:33:20.530014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:49.098 
16:33:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.098 16:33:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.356 16:33:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.356 "name": "Existed_Raid", 00:15:49.356 "uuid": "06a1a858-d8b6-4179-a7ee-14907f81eca2", 00:15:49.356 "strip_size_kb": 64, 00:15:49.356 "state": "configuring", 00:15:49.356 "raid_level": "raid0", 00:15:49.356 "superblock": true, 00:15:49.356 "num_base_bdevs": 3, 00:15:49.356 "num_base_bdevs_discovered": 1, 00:15:49.356 "num_base_bdevs_operational": 3, 00:15:49.356 "base_bdevs_list": [ 00:15:49.356 { 00:15:49.356 "name": "BaseBdev1", 00:15:49.356 "uuid": "9425d184-a319-43b6-86c9-ec3e9012f2c0", 00:15:49.356 "is_configured": true, 00:15:49.356 "data_offset": 2048, 00:15:49.356 "data_size": 63488 00:15:49.356 }, 00:15:49.356 { 00:15:49.356 "name": "BaseBdev2", 00:15:49.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.356 "is_configured": false, 00:15:49.356 "data_offset": 0, 00:15:49.356 "data_size": 0 00:15:49.356 }, 00:15:49.356 { 00:15:49.356 "name": "BaseBdev3", 00:15:49.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.357 "is_configured": false, 00:15:49.357 "data_offset": 0, 00:15:49.357 "data_size": 0 00:15:49.357 } 00:15:49.357 ] 00:15:49.357 }' 00:15:49.357 16:33:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.357 16:33:20 -- common/autotest_common.sh@10 -- # set +x 00:15:50.291 16:33:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:50.291 [2024-07-13 16:33:21.690430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.291 BaseBdev2 00:15:50.291 16:33:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:50.291 16:33:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:50.291 16:33:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:50.291 16:33:21 -- common/autotest_common.sh@889 -- # local i 00:15:50.291 16:33:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:50.291 16:33:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:50.291 16:33:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:50.551 16:33:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.810 [ 00:15:50.811 { 00:15:50.811 "name": "BaseBdev2", 00:15:50.811 "aliases": [ 00:15:50.811 
"146316ff-cce9-4556-83a1-37cdb764cb12" 00:15:50.811 ], 00:15:50.811 "product_name": "Malloc disk", 00:15:50.811 "block_size": 512, 00:15:50.811 "num_blocks": 65536, 00:15:50.811 "uuid": "146316ff-cce9-4556-83a1-37cdb764cb12", 00:15:50.811 "assigned_rate_limits": { 00:15:50.811 "rw_ios_per_sec": 0, 00:15:50.811 "rw_mbytes_per_sec": 0, 00:15:50.811 "r_mbytes_per_sec": 0, 00:15:50.811 "w_mbytes_per_sec": 0 00:15:50.811 }, 00:15:50.811 "claimed": true, 00:15:50.811 "claim_type": "exclusive_write", 00:15:50.811 "zoned": false, 00:15:50.811 "supported_io_types": { 00:15:50.811 "read": true, 00:15:50.811 "write": true, 00:15:50.811 "unmap": true, 00:15:50.811 "write_zeroes": true, 00:15:50.811 "flush": true, 00:15:50.811 "reset": true, 00:15:50.811 "compare": false, 00:15:50.811 "compare_and_write": false, 00:15:50.811 "abort": true, 00:15:50.811 "nvme_admin": false, 00:15:50.811 "nvme_io": false 00:15:50.811 }, 00:15:50.811 "memory_domains": [ 00:15:50.811 { 00:15:50.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.811 "dma_device_type": 2 00:15:50.811 } 00:15:50.811 ], 00:15:50.811 "driver_specific": {} 00:15:50.811 } 00:15:50.811 ] 00:15:50.811 16:33:22 -- common/autotest_common.sh@895 -- # return 0 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.811 16:33:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.069 16:33:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.069 "name": "Existed_Raid", 00:15:51.069 "uuid": "06a1a858-d8b6-4179-a7ee-14907f81eca2", 00:15:51.069 "strip_size_kb": 64, 00:15:51.069 "state": "configuring", 00:15:51.069 "raid_level": "raid0", 00:15:51.069 "superblock": true, 00:15:51.069 "num_base_bdevs": 3, 00:15:51.069 "num_base_bdevs_discovered": 2, 00:15:51.069 "num_base_bdevs_operational": 3, 00:15:51.069 "base_bdevs_list": [ 00:15:51.069 { 00:15:51.069 "name": "BaseBdev1", 00:15:51.069 "uuid": "9425d184-a319-43b6-86c9-ec3e9012f2c0", 00:15:51.069 "is_configured": true, 00:15:51.069 "data_offset": 2048, 00:15:51.069 "data_size": 63488 00:15:51.069 }, 00:15:51.069 { 00:15:51.070 "name": "BaseBdev2", 00:15:51.070 "uuid": "146316ff-cce9-4556-83a1-37cdb764cb12", 00:15:51.070 "is_configured": true, 00:15:51.070 "data_offset": 2048, 00:15:51.070 "data_size": 63488 00:15:51.070 }, 00:15:51.070 { 00:15:51.070 "name": "BaseBdev3", 00:15:51.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.070 "is_configured": false, 00:15:51.070 "data_offset": 0, 00:15:51.070 "data_size": 0 00:15:51.070 
} 00:15:51.070 ] 00:15:51.070 }' 00:15:51.070 16:33:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.070 16:33:22 -- common/autotest_common.sh@10 -- # set +x 00:15:51.634 16:33:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:51.891 [2024-07-13 16:33:23.328403] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.891 [2024-07-13 16:33:23.328969] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:15:51.891 [2024-07-13 16:33:23.329105] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:51.891 [2024-07-13 16:33:23.329328] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:51.891 [2024-07-13 16:33:23.329873] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:15:51.891 [2024-07-13 16:33:23.329986] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:15:51.891 [2024-07-13 16:33:23.330272] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.891 BaseBdev3 00:15:52.148 16:33:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:52.148 16:33:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:52.148 16:33:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:52.148 16:33:23 -- common/autotest_common.sh@889 -- # local i 00:15:52.148 16:33:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:52.148 16:33:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:52.148 16:33:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:52.406 16:33:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:52.665 [ 00:15:52.665 { 00:15:52.665 "name": "BaseBdev3", 00:15:52.665 "aliases": [ 00:15:52.665 "4689cbf4-c216-4476-82a2-8443c41fc1b7" 00:15:52.665 ], 00:15:52.665 "product_name": "Malloc disk", 00:15:52.665 "block_size": 512, 00:15:52.665 "num_blocks": 65536, 00:15:52.665 "uuid": "4689cbf4-c216-4476-82a2-8443c41fc1b7", 00:15:52.665 "assigned_rate_limits": { 00:15:52.665 "rw_ios_per_sec": 0, 00:15:52.665 "rw_mbytes_per_sec": 0, 00:15:52.665 "r_mbytes_per_sec": 0, 00:15:52.665 "w_mbytes_per_sec": 0 00:15:52.665 }, 00:15:52.665 "claimed": true, 00:15:52.665 "claim_type": "exclusive_write", 00:15:52.665 "zoned": false, 00:15:52.665 "supported_io_types": { 00:15:52.665 "read": true, 00:15:52.665 "write": true, 00:15:52.665 "unmap": true, 00:15:52.665 "write_zeroes": true, 00:15:52.665 "flush": true, 00:15:52.665 "reset": true, 00:15:52.665 "compare": false, 00:15:52.665 "compare_and_write": false, 00:15:52.665 "abort": true, 00:15:52.665 "nvme_admin": false, 00:15:52.665 "nvme_io": false 00:15:52.665 }, 00:15:52.665 "memory_domains": [ 00:15:52.665 { 00:15:52.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.665 "dma_device_type": 2 00:15:52.665 } 00:15:52.665 ], 00:15:52.665 "driver_specific": {} 00:15:52.665 } 00:15:52.665 ] 00:15:52.665 16:33:23 -- common/autotest_common.sh@895 -- # return 0 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.665 16:33:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.924 16:33:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.924 "name": "Existed_Raid", 00:15:52.924 "uuid": "06a1a858-d8b6-4179-a7ee-14907f81eca2", 00:15:52.924 "strip_size_kb": 64, 00:15:52.924 "state": "online", 00:15:52.924 "raid_level": "raid0", 00:15:52.924 "superblock": true, 00:15:52.924 "num_base_bdevs": 3, 00:15:52.924 "num_base_bdevs_discovered": 3, 00:15:52.924 "num_base_bdevs_operational": 3, 00:15:52.924 "base_bdevs_list": [ 00:15:52.924 { 00:15:52.924 "name": "BaseBdev1", 00:15:52.924 "uuid": "9425d184-a319-43b6-86c9-ec3e9012f2c0", 00:15:52.924 "is_configured": true, 00:15:52.924 "data_offset": 2048, 00:15:52.924 "data_size": 63488 00:15:52.924 }, 00:15:52.924 { 00:15:52.924 "name": "BaseBdev2", 00:15:52.924 "uuid": "146316ff-cce9-4556-83a1-37cdb764cb12", 00:15:52.924 "is_configured": true, 00:15:52.924 "data_offset": 2048, 00:15:52.924 "data_size": 63488 00:15:52.924 }, 00:15:52.924 { 00:15:52.924 "name": "BaseBdev3", 00:15:52.924 "uuid": "4689cbf4-c216-4476-82a2-8443c41fc1b7", 00:15:52.924 "is_configured": true, 00:15:52.924 "data_offset": 2048, 00:15:52.924 "data_size": 63488 00:15:52.924 } 00:15:52.924 ] 00:15:52.924 }' 00:15:52.924 16:33:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.924 16:33:24 -- common/autotest_common.sh@10 -- # set +x 00:15:53.497 16:33:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:53.755 [2024-07-13 16:33:24.981011] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:53.755 [2024-07-13 16:33:24.981278] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.755 [2024-07-13 16:33:24.981518] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.755 16:33:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.013 16:33:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.013 "name": "Existed_Raid", 00:15:54.013 "uuid": "06a1a858-d8b6-4179-a7ee-14907f81eca2", 00:15:54.013 "strip_size_kb": 64, 00:15:54.013 "state": "offline", 00:15:54.013 "raid_level": "raid0", 00:15:54.013 "superblock": true, 00:15:54.013 "num_base_bdevs": 3, 00:15:54.013 "num_base_bdevs_discovered": 2, 00:15:54.013 "num_base_bdevs_operational": 2, 00:15:54.013 "base_bdevs_list": [ 00:15:54.013 { 00:15:54.013 "name": null, 00:15:54.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.013 "is_configured": false, 00:15:54.013 "data_offset": 2048, 00:15:54.013 "data_size": 63488 00:15:54.013 }, 00:15:54.013 { 00:15:54.013 "name": "BaseBdev2", 00:15:54.013 "uuid": "146316ff-cce9-4556-83a1-37cdb764cb12", 00:15:54.013 "is_configured": true, 00:15:54.013 "data_offset": 2048, 00:15:54.013 "data_size": 63488 00:15:54.013 }, 00:15:54.013 { 00:15:54.013 "name": "BaseBdev3", 00:15:54.013 "uuid": "4689cbf4-c216-4476-82a2-8443c41fc1b7", 00:15:54.013 "is_configured": true, 00:15:54.013 "data_offset": 2048, 00:15:54.013 "data_size": 63488 00:15:54.013 } 00:15:54.013 ] 00:15:54.013 }' 00:15:54.013 16:33:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.013 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:15:54.580 16:33:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:54.580 16:33:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:54.580 16:33:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:54.580 16:33:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.580 16:33:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:54.580 16:33:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.580 16:33:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:54.863 [2024-07-13 16:33:26.280802] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.863 16:33:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:55.146 16:33:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.146 16:33:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.146 16:33:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:55.146 16:33:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:55.146 16:33:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.146 16:33:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:55.404 [2024-07-13 16:33:26.746442] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:55.404 [2024-07-13 
16:33:26.746798] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:15:55.404 16:33:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:55.404 16:33:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.404 16:33:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.404 16:33:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:55.661 16:33:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:55.661 16:33:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:55.661 16:33:27 -- bdev/bdev_raid.sh@287 -- # killprocess 125936 00:15:55.661 16:33:27 -- common/autotest_common.sh@926 -- # '[' -z 125936 ']' 00:15:55.661 16:33:27 -- common/autotest_common.sh@930 -- # kill -0 125936 00:15:55.661 16:33:27 -- common/autotest_common.sh@931 -- # uname 00:15:55.661 16:33:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:55.661 16:33:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125936 00:15:55.661 16:33:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:55.661 16:33:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:55.661 16:33:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125936' 00:15:55.661 killing process with pid 125936 00:15:55.661 16:33:27 -- common/autotest_common.sh@945 -- # kill 125936 00:15:55.661 16:33:27 -- common/autotest_common.sh@950 -- # wait 125936 00:15:55.661 [2024-07-13 16:33:27.107954] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.661 [2024-07-13 16:33:27.108066] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:56.226 00:15:56.226 real 0m12.154s 00:15:56.226 user 0m21.328s 00:15:56.226 sys 0m2.186s 00:15:56.226 16:33:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.226 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:15:56.226 ************************************ 00:15:56.226 END TEST raid_state_function_test_sb 00:15:56.226 ************************************ 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:56.226 16:33:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:56.226 16:33:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:56.226 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:15:56.226 ************************************ 00:15:56.226 START TEST raid_superblock_test 00:15:56.226 ************************************ 00:15:56.226 16:33:27 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:56.226 16:33:27 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:56.226 16:33:27 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:56.227 16:33:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:56.227 16:33:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:56.227 16:33:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=126315 00:15:56.227 16:33:27 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:56.227 16:33:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126315 /var/tmp/spdk-raid.sock 00:15:56.227 16:33:27 -- common/autotest_common.sh@819 -- # '[' -z 126315 ']' 00:15:56.227 16:33:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:56.227 16:33:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.227 16:33:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:56.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:56.227 16:33:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.227 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 [2024-07-13 16:33:27.663846] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:56.227 [2024-07-13 16:33:27.664461] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126315 ] 00:15:56.485 [2024-07-13 16:33:27.825596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.485 [2024-07-13 16:33:27.914708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.745 [2024-07-13 16:33:28.003042] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.312 16:33:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:57.312 16:33:28 -- common/autotest_common.sh@852 -- # return 0 00:15:57.312 16:33:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:57.312 16:33:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:57.312 16:33:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:57.312 16:33:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:57.312 16:33:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:57.312 16:33:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.312 16:33:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.312 16:33:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.312 16:33:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:57.571 malloc1 00:15:57.571 16:33:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:57.829 [2024-07-13 16:33:29.064714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.829 [2024-07-13 16:33:29.065156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.829 
[2024-07-13 16:33:29.065237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:57.829 [2024-07-13 16:33:29.065391] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.829 [2024-07-13 16:33:29.068865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.829 [2024-07-13 16:33:29.069062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.829 pt1 00:15:57.829 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:57.829 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:57.829 16:33:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:57.829 16:33:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:57.829 16:33:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:57.829 16:33:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.829 16:33:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.829 16:33:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.829 16:33:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:58.087 malloc2 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.087 [2024-07-13 16:33:29.521800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.087 [2024-07-13 16:33:29.522185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.087 [2024-07-13 16:33:29.522291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:58.087 [2024-07-13 16:33:29.522435] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.087 [2024-07-13 16:33:29.525406] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.087 [2024-07-13 16:33:29.525607] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.087 pt2 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.087 16:33:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:58.347 malloc3 00:15:58.347 16:33:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:58.606 [2024-07-13 16:33:29.948581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:58.606 [2024-07-13 16:33:29.949038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.606 
[2024-07-13 16:33:29.949139] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.606 [2024-07-13 16:33:29.949269] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.606 [2024-07-13 16:33:29.952309] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.606 [2024-07-13 16:33:29.952509] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:58.606 pt3 00:15:58.606 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:58.606 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:58.606 16:33:29 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:58.865 [2024-07-13 16:33:30.156994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.865 [2024-07-13 16:33:30.159974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.865 [2024-07-13 16:33:30.160170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:58.865 [2024-07-13 16:33:30.160469] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:58.865 [2024-07-13 16:33:30.160588] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:58.865 [2024-07-13 16:33:30.160870] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:15:58.865 [2024-07-13 16:33:30.161427] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:58.865 [2024-07-13 16:33:30.161541] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:15:58.865 [2024-07-13 16:33:30.161883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.865 16:33:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.123 16:33:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.123 "name": "raid_bdev1", 00:15:59.123 "uuid": "fca83c98-3a16-4bc6-b76b-05a2c754c73e", 00:15:59.123 "strip_size_kb": 64, 00:15:59.123 "state": "online", 00:15:59.123 "raid_level": "raid0", 00:15:59.123 "superblock": true, 00:15:59.123 "num_base_bdevs": 3, 00:15:59.123 "num_base_bdevs_discovered": 3, 00:15:59.123 "num_base_bdevs_operational": 3, 00:15:59.123 "base_bdevs_list": [ 00:15:59.123 { 00:15:59.123 "name": "pt1", 00:15:59.123 "uuid": 
"c5d05ec5-524d-5a28-9a19-fdc41e3b4860", 00:15:59.123 "is_configured": true, 00:15:59.123 "data_offset": 2048, 00:15:59.123 "data_size": 63488 00:15:59.123 }, 00:15:59.123 { 00:15:59.123 "name": "pt2", 00:15:59.123 "uuid": "bf58c4f3-9e17-59f5-ad9a-5b0e9e0478df", 00:15:59.123 "is_configured": true, 00:15:59.123 "data_offset": 2048, 00:15:59.123 "data_size": 63488 00:15:59.123 }, 00:15:59.123 { 00:15:59.123 "name": "pt3", 00:15:59.123 "uuid": "320caa14-ec5d-5948-88a7-827cf5a0b2fe", 00:15:59.123 "is_configured": true, 00:15:59.123 "data_offset": 2048, 00:15:59.123 "data_size": 63488 00:15:59.123 } 00:15:59.123 ] 00:15:59.123 }' 00:15:59.123 16:33:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.123 16:33:30 -- common/autotest_common.sh@10 -- # set +x 00:15:59.691 16:33:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:59.691 16:33:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:59.950 [2024-07-13 16:33:31.206186] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.950 16:33:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fca83c98-3a16-4bc6-b76b-05a2c754c73e 00:15:59.950 16:33:31 -- bdev/bdev_raid.sh@380 -- # '[' -z fca83c98-3a16-4bc6-b76b-05a2c754c73e ']' 00:15:59.950 16:33:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:00.209 [2024-07-13 16:33:31.450033] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.209 [2024-07-13 16:33:31.450299] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.209 [2024-07-13 16:33:31.450551] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.210 [2024-07-13 16:33:31.450710] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.210 [2024-07-13 16:33:31.450807] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:16:00.210 16:33:31 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.210 16:33:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:00.469 16:33:31 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:00.469 16:33:31 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:00.469 16:33:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.469 16:33:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:00.469 16:33:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.469 16:33:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:00.736 16:33:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.736 16:33:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:00.999 16:33:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:00.999 16:33:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:01.257 16:33:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:01.257 16:33:32 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:01.257 16:33:32 -- common/autotest_common.sh@640 -- # local es=0 00:16:01.257 16:33:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:01.257 16:33:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.257 16:33:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:01.258 16:33:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.258 16:33:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:01.258 16:33:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.258 16:33:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:01.258 16:33:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.258 16:33:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:01.258 16:33:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:01.515 [2024-07-13 16:33:32.758314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:01.515 [2024-07-13 16:33:32.761260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:01.515 [2024-07-13 16:33:32.761478] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:01.515 [2024-07-13 16:33:32.761568] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:01.515 [2024-07-13 16:33:32.761790] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:01.515 [2024-07-13 16:33:32.761860] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:01.516 [2024-07-13 16:33:32.762059] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.516 [2024-07-13 16:33:32.762148] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:16:01.516 request: 00:16:01.516 { 00:16:01.516 "name": "raid_bdev1", 00:16:01.516 "raid_level": "raid0", 00:16:01.516 "base_bdevs": [ 00:16:01.516 "malloc1", 00:16:01.516 "malloc2", 00:16:01.516 "malloc3" 00:16:01.516 ], 00:16:01.516 "superblock": false, 00:16:01.516 "strip_size_kb": 64, 00:16:01.516 "method": "bdev_raid_create", 00:16:01.516 "req_id": 1 00:16:01.516 } 00:16:01.516 Got JSON-RPC error response 00:16:01.516 response: 00:16:01.516 { 00:16:01.516 "code": -17, 00:16:01.516 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:01.516 } 00:16:01.516 16:33:32 -- common/autotest_common.sh@643 -- # es=1 00:16:01.516 16:33:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:01.516 16:33:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:01.516 16:33:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:01.516 16:33:32 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.516 16:33:32 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:01.775 16:33:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:01.775 16:33:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:01.775 16:33:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.775 [2024-07-13 16:33:33.226586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.775 [2024-07-13 16:33:33.226949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.775 [2024-07-13 16:33:33.227027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:01.775 [2024-07-13 16:33:33.227121] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.775 [2024-07-13 16:33:33.230127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.775 [2024-07-13 16:33:33.230316] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.775 [2024-07-13 16:33:33.230521] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:01.776 [2024-07-13 16:33:33.230694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.776 pt1 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.035 "name": "raid_bdev1", 00:16:02.035 "uuid": "fca83c98-3a16-4bc6-b76b-05a2c754c73e", 00:16:02.035 "strip_size_kb": 64, 00:16:02.035 "state": "configuring", 00:16:02.035 "raid_level": "raid0", 00:16:02.035 "superblock": true, 00:16:02.035 "num_base_bdevs": 3, 00:16:02.035 "num_base_bdevs_discovered": 1, 00:16:02.035 "num_base_bdevs_operational": 3, 00:16:02.035 "base_bdevs_list": [ 00:16:02.035 { 00:16:02.035 "name": "pt1", 00:16:02.035 "uuid": "c5d05ec5-524d-5a28-9a19-fdc41e3b4860", 00:16:02.035 "is_configured": true, 00:16:02.035 "data_offset": 2048, 00:16:02.035 "data_size": 63488 00:16:02.035 }, 00:16:02.035 { 00:16:02.035 "name": null, 00:16:02.035 "uuid": "bf58c4f3-9e17-59f5-ad9a-5b0e9e0478df", 00:16:02.035 "is_configured": false, 00:16:02.035 "data_offset": 2048, 00:16:02.035 "data_size": 63488 00:16:02.035 }, 00:16:02.035 { 00:16:02.035 "name": null, 00:16:02.035 "uuid": "320caa14-ec5d-5948-88a7-827cf5a0b2fe", 00:16:02.035 "is_configured": false, 00:16:02.035 
"data_offset": 2048, 00:16:02.035 "data_size": 63488 00:16:02.035 } 00:16:02.035 ] 00:16:02.035 }' 00:16:02.035 16:33:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.035 16:33:33 -- common/autotest_common.sh@10 -- # set +x 00:16:02.603 16:33:34 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:02.603 16:33:34 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.862 [2024-07-13 16:33:34.318859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.862 [2024-07-13 16:33:34.319250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.862 [2024-07-13 16:33:34.319345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:02.862 [2024-07-13 16:33:34.319463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.862 [2024-07-13 16:33:34.320045] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.862 [2024-07-13 16:33:34.320191] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.862 [2024-07-13 16:33:34.320404] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:02.862 [2024-07-13 16:33:34.320500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.862 pt2 00:16:03.120 16:33:34 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:03.120 [2024-07-13 16:33:34.582935] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.379 16:33:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.637 16:33:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.637 "name": "raid_bdev1", 00:16:03.637 "uuid": "fca83c98-3a16-4bc6-b76b-05a2c754c73e", 00:16:03.637 "strip_size_kb": 64, 00:16:03.637 "state": "configuring", 00:16:03.637 "raid_level": "raid0", 00:16:03.637 "superblock": true, 00:16:03.637 "num_base_bdevs": 3, 00:16:03.637 "num_base_bdevs_discovered": 1, 00:16:03.637 "num_base_bdevs_operational": 3, 00:16:03.637 "base_bdevs_list": [ 00:16:03.637 { 00:16:03.637 "name": "pt1", 00:16:03.637 "uuid": "c5d05ec5-524d-5a28-9a19-fdc41e3b4860", 00:16:03.637 "is_configured": true, 00:16:03.637 "data_offset": 2048, 00:16:03.637 "data_size": 63488 00:16:03.637 }, 00:16:03.637 { 00:16:03.637 "name": null, 00:16:03.637 "uuid": 
"bf58c4f3-9e17-59f5-ad9a-5b0e9e0478df", 00:16:03.637 "is_configured": false, 00:16:03.637 "data_offset": 2048, 00:16:03.637 "data_size": 63488 00:16:03.637 }, 00:16:03.637 { 00:16:03.637 "name": null, 00:16:03.637 "uuid": "320caa14-ec5d-5948-88a7-827cf5a0b2fe", 00:16:03.637 "is_configured": false, 00:16:03.637 "data_offset": 2048, 00:16:03.637 "data_size": 63488 00:16:03.638 } 00:16:03.638 ] 00:16:03.638 }' 00:16:03.638 16:33:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.638 16:33:34 -- common/autotest_common.sh@10 -- # set +x 00:16:04.204 16:33:35 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:04.204 16:33:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:04.204 16:33:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.204 [2024-07-13 16:33:35.651060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.204 [2024-07-13 16:33:35.651444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.204 [2024-07-13 16:33:35.651520] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:04.204 [2024-07-13 16:33:35.651619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.204 [2024-07-13 16:33:35.652154] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.204 [2024-07-13 16:33:35.652315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.204 [2024-07-13 16:33:35.652508] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:04.204 [2024-07-13 16:33:35.652604] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.204 pt2 00:16:04.204 16:33:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:04.204 16:33:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:04.204 16:33:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:04.463 [2024-07-13 16:33:35.843179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:04.463 [2024-07-13 16:33:35.843511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.463 [2024-07-13 16:33:35.843589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:04.463 [2024-07-13 16:33:35.843702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.463 [2024-07-13 16:33:35.844244] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.463 [2024-07-13 16:33:35.844402] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:04.463 [2024-07-13 16:33:35.844599] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:04.463 [2024-07-13 16:33:35.844694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:04.463 [2024-07-13 16:33:35.844849] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:04.463 [2024-07-13 16:33:35.844941] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.463 [2024-07-13 16:33:35.845062] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:16:04.463 [2024-07-13 16:33:35.845414] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:04.463 [2024-07-13 16:33:35.845510] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:04.463 [2024-07-13 16:33:35.845680] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.463 pt3 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.463 16:33:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.722 16:33:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.722 "name": "raid_bdev1", 00:16:04.722 "uuid": "fca83c98-3a16-4bc6-b76b-05a2c754c73e", 00:16:04.722 "strip_size_kb": 64, 00:16:04.722 "state": "online", 00:16:04.722 "raid_level": "raid0", 00:16:04.722 "superblock": true, 00:16:04.722 "num_base_bdevs": 3, 00:16:04.722 "num_base_bdevs_discovered": 3, 00:16:04.722 "num_base_bdevs_operational": 3, 00:16:04.722 "base_bdevs_list": [ 00:16:04.722 { 00:16:04.722 "name": "pt1", 00:16:04.722 "uuid": "c5d05ec5-524d-5a28-9a19-fdc41e3b4860", 00:16:04.722 "is_configured": true, 00:16:04.722 "data_offset": 2048, 00:16:04.722 "data_size": 63488 00:16:04.722 }, 00:16:04.722 { 00:16:04.722 "name": "pt2", 00:16:04.722 "uuid": "bf58c4f3-9e17-59f5-ad9a-5b0e9e0478df", 00:16:04.722 "is_configured": true, 00:16:04.722 "data_offset": 2048, 00:16:04.722 "data_size": 63488 00:16:04.722 }, 00:16:04.722 { 00:16:04.722 "name": "pt3", 00:16:04.722 "uuid": "320caa14-ec5d-5948-88a7-827cf5a0b2fe", 00:16:04.722 "is_configured": true, 00:16:04.722 "data_offset": 2048, 00:16:04.722 "data_size": 63488 00:16:04.722 } 00:16:04.722 ] 00:16:04.722 }' 00:16:04.722 16:33:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.722 16:33:36 -- common/autotest_common.sh@10 -- # set +x 00:16:05.287 16:33:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:05.287 16:33:36 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:05.545 [2024-07-13 16:33:36.891537] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.545 16:33:36 -- bdev/bdev_raid.sh@430 -- # '[' fca83c98-3a16-4bc6-b76b-05a2c754c73e '!=' fca83c98-3a16-4bc6-b76b-05a2c754c73e ']' 00:16:05.545 16:33:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:05.545 16:33:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:05.545 
16:33:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:05.545 16:33:36 -- bdev/bdev_raid.sh@511 -- # killprocess 126315 00:16:05.545 16:33:36 -- common/autotest_common.sh@926 -- # '[' -z 126315 ']' 00:16:05.545 16:33:36 -- common/autotest_common.sh@930 -- # kill -0 126315 00:16:05.546 16:33:36 -- common/autotest_common.sh@931 -- # uname 00:16:05.546 16:33:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.546 16:33:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126315 00:16:05.546 killing process with pid 126315 00:16:05.546 16:33:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.546 16:33:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.546 16:33:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126315' 00:16:05.546 16:33:36 -- common/autotest_common.sh@945 -- # kill 126315 00:16:05.546 16:33:36 -- common/autotest_common.sh@950 -- # wait 126315 00:16:05.546 [2024-07-13 16:33:36.947861] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.546 [2024-07-13 16:33:36.947972] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.546 [2024-07-13 16:33:36.948043] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.546 [2024-07-13 16:33:36.948053] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:05.546 [2024-07-13 16:33:37.014987] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.110 16:33:37 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:06.110 00:16:06.110 real 0m9.835s 00:16:06.110 user 0m16.893s 00:16:06.110 sys 0m1.976s 00:16:06.110 16:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.110 16:33:37 -- common/autotest_common.sh@10 -- # set +x 00:16:06.110 ************************************ 00:16:06.110 END TEST raid_superblock_test 00:16:06.110 ************************************ 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:06.111 16:33:37 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:06.111 16:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:06.111 16:33:37 -- common/autotest_common.sh@10 -- # set +x 00:16:06.111 ************************************ 00:16:06.111 START TEST raid_state_function_test 00:16:06.111 ************************************ 00:16:06.111 16:33:37 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=126613 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126613' 00:16:06.111 Process raid pid: 126613 00:16:06.111 16:33:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126613 /var/tmp/spdk-raid.sock 00:16:06.111 16:33:37 -- common/autotest_common.sh@819 -- # '[' -z 126613 ']' 00:16:06.111 16:33:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:06.111 16:33:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:06.111 16:33:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:06.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:06.111 16:33:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:06.111 16:33:37 -- common/autotest_common.sh@10 -- # set +x 00:16:06.111 [2024-07-13 16:33:37.574814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
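The trace above has just launched the standalone RPC target that the whole state-function test drives: bdev_svc is a bare SPDK application that registers no bdevs of its own and only serves the JSON-RPC socket, with -L bdev_raid switching on the *DEBUG* lines that dominate this log. A minimal sketch of that startup pattern, assuming the repository path shown in the log; polling rpc_get_methods is a stand-in for the harness's waitforlisten helper, not its actual implementation:

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock

  # Bare SPDK app: no bdevs pre-registered, JSON-RPC served on $SOCK,
  # raid bdev debug tracing enabled.
  "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
  raid_pid=$!

  # Stand-in for waitforlisten: poll until the socket accepts RPCs.
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

Every later step drives this one process through the same socket. Note in the lines that follow that the first bdev_raid_create is issued before any BaseBdev* exists, so the array deliberately parks in the "configuring" state with num_base_bdevs_discovered 0.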
00:16:06.111 [2024-07-13 16:33:37.575405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.369 [2024-07-13 16:33:37.730039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.369 [2024-07-13 16:33:37.817213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.628 [2024-07-13 16:33:37.898881] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.195 16:33:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:07.195 16:33:38 -- common/autotest_common.sh@852 -- # return 0 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:07.195 [2024-07-13 16:33:38.621180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:07.195 [2024-07-13 16:33:38.621577] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:07.195 [2024-07-13 16:33:38.621669] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.195 [2024-07-13 16:33:38.621724] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.195 [2024-07-13 16:33:38.621750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:07.195 [2024-07-13 16:33:38.621828] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.195 16:33:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.452 16:33:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.452 "name": "Existed_Raid", 00:16:07.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.452 "strip_size_kb": 64, 00:16:07.452 "state": "configuring", 00:16:07.452 "raid_level": "concat", 00:16:07.452 "superblock": false, 00:16:07.452 "num_base_bdevs": 3, 00:16:07.452 "num_base_bdevs_discovered": 0, 00:16:07.452 "num_base_bdevs_operational": 3, 00:16:07.452 "base_bdevs_list": [ 00:16:07.452 { 00:16:07.452 "name": "BaseBdev1", 00:16:07.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.452 "is_configured": false, 00:16:07.452 "data_offset": 0, 00:16:07.452 "data_size": 0 00:16:07.452 }, 00:16:07.452 { 00:16:07.452 "name": "BaseBdev2", 00:16:07.452 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:07.452 "is_configured": false, 00:16:07.452 "data_offset": 0, 00:16:07.452 "data_size": 0 00:16:07.452 }, 00:16:07.452 { 00:16:07.452 "name": "BaseBdev3", 00:16:07.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.452 "is_configured": false, 00:16:07.452 "data_offset": 0, 00:16:07.452 "data_size": 0 00:16:07.452 } 00:16:07.452 ] 00:16:07.452 }' 00:16:07.452 16:33:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.452 16:33:38 -- common/autotest_common.sh@10 -- # set +x 00:16:08.387 16:33:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:08.387 [2024-07-13 16:33:39.801225] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.387 [2024-07-13 16:33:39.801290] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:08.387 16:33:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:08.646 [2024-07-13 16:33:40.073334] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.646 [2024-07-13 16:33:40.073424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.646 [2024-07-13 16:33:40.073434] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.646 [2024-07-13 16:33:40.073461] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.646 [2024-07-13 16:33:40.073467] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:08.646 [2024-07-13 16:33:40.073494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.646 16:33:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.905 [2024-07-13 16:33:40.333588] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.905 BaseBdev1 00:16:08.905 16:33:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:08.905 16:33:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:08.905 16:33:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:08.905 16:33:40 -- common/autotest_common.sh@889 -- # local i 00:16:08.905 16:33:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:08.905 16:33:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:08.905 16:33:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.164 16:33:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.423 [ 00:16:09.423 { 00:16:09.423 "name": "BaseBdev1", 00:16:09.423 "aliases": [ 00:16:09.423 "74216e34-b0b0-4945-af22-09d5a2a03629" 00:16:09.423 ], 00:16:09.423 "product_name": "Malloc disk", 00:16:09.423 "block_size": 512, 00:16:09.423 "num_blocks": 65536, 00:16:09.423 "uuid": "74216e34-b0b0-4945-af22-09d5a2a03629", 00:16:09.423 "assigned_rate_limits": { 00:16:09.423 "rw_ios_per_sec": 0, 00:16:09.423 "rw_mbytes_per_sec": 0, 00:16:09.423 "r_mbytes_per_sec": 0, 00:16:09.423 "w_mbytes_per_sec": 
0 00:16:09.423 }, 00:16:09.423 "claimed": true, 00:16:09.423 "claim_type": "exclusive_write", 00:16:09.423 "zoned": false, 00:16:09.423 "supported_io_types": { 00:16:09.423 "read": true, 00:16:09.423 "write": true, 00:16:09.423 "unmap": true, 00:16:09.423 "write_zeroes": true, 00:16:09.423 "flush": true, 00:16:09.423 "reset": true, 00:16:09.423 "compare": false, 00:16:09.423 "compare_and_write": false, 00:16:09.423 "abort": true, 00:16:09.423 "nvme_admin": false, 00:16:09.423 "nvme_io": false 00:16:09.423 }, 00:16:09.423 "memory_domains": [ 00:16:09.423 { 00:16:09.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.423 "dma_device_type": 2 00:16:09.423 } 00:16:09.423 ], 00:16:09.423 "driver_specific": {} 00:16:09.423 } 00:16:09.423 ] 00:16:09.423 16:33:40 -- common/autotest_common.sh@895 -- # return 0 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.423 16:33:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.682 16:33:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.682 "name": "Existed_Raid", 00:16:09.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.682 "strip_size_kb": 64, 00:16:09.682 "state": "configuring", 00:16:09.682 "raid_level": "concat", 00:16:09.682 "superblock": false, 00:16:09.682 "num_base_bdevs": 3, 00:16:09.682 "num_base_bdevs_discovered": 1, 00:16:09.682 "num_base_bdevs_operational": 3, 00:16:09.682 "base_bdevs_list": [ 00:16:09.682 { 00:16:09.682 "name": "BaseBdev1", 00:16:09.682 "uuid": "74216e34-b0b0-4945-af22-09d5a2a03629", 00:16:09.682 "is_configured": true, 00:16:09.682 "data_offset": 0, 00:16:09.682 "data_size": 65536 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "name": "BaseBdev2", 00:16:09.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.682 "is_configured": false, 00:16:09.682 "data_offset": 0, 00:16:09.682 "data_size": 0 00:16:09.682 }, 00:16:09.682 { 00:16:09.682 "name": "BaseBdev3", 00:16:09.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.682 "is_configured": false, 00:16:09.682 "data_offset": 0, 00:16:09.682 "data_size": 0 00:16:09.682 } 00:16:09.682 ] 00:16:09.682 }' 00:16:09.682 16:33:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.682 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:16:10.251 16:33:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:10.510 [2024-07-13 16:33:41.761872] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.510 [2024-07-13 16:33:41.761950] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:16:10.510 16:33:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:10.510 16:33:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:10.768 [2024-07-13 16:33:42.034093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.768 [2024-07-13 16:33:42.036715] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.768 [2024-07-13 16:33:42.036791] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.768 [2024-07-13 16:33:42.036800] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:10.768 [2024-07-13 16:33:42.036826] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.768 16:33:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.027 16:33:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.027 "name": "Existed_Raid", 00:16:11.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.027 "strip_size_kb": 64, 00:16:11.027 "state": "configuring", 00:16:11.027 "raid_level": "concat", 00:16:11.027 "superblock": false, 00:16:11.027 "num_base_bdevs": 3, 00:16:11.027 "num_base_bdevs_discovered": 1, 00:16:11.027 "num_base_bdevs_operational": 3, 00:16:11.027 "base_bdevs_list": [ 00:16:11.027 { 00:16:11.027 "name": "BaseBdev1", 00:16:11.027 "uuid": "74216e34-b0b0-4945-af22-09d5a2a03629", 00:16:11.027 "is_configured": true, 00:16:11.027 "data_offset": 0, 00:16:11.027 "data_size": 65536 00:16:11.027 }, 00:16:11.027 { 00:16:11.027 "name": "BaseBdev2", 00:16:11.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.027 "is_configured": false, 00:16:11.027 "data_offset": 0, 00:16:11.027 "data_size": 0 00:16:11.027 }, 00:16:11.027 { 00:16:11.027 "name": "BaseBdev3", 00:16:11.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.027 "is_configured": false, 00:16:11.027 "data_offset": 0, 00:16:11.027 "data_size": 0 00:16:11.027 } 00:16:11.027 ] 00:16:11.027 }' 00:16:11.027 16:33:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.027 16:33:42 -- common/autotest_common.sh@10 -- # set +x 00:16:11.594 16:33:42 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:11.594 [2024-07-13 16:33:43.014464] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.594 BaseBdev2 00:16:11.594 16:33:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:11.594 16:33:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:11.594 16:33:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:11.594 16:33:43 -- common/autotest_common.sh@889 -- # local i 00:16:11.594 16:33:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:11.594 16:33:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:11.594 16:33:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.851 16:33:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:12.110 [ 00:16:12.110 { 00:16:12.110 "name": "BaseBdev2", 00:16:12.110 "aliases": [ 00:16:12.110 "29be1ff1-d8bf-4e43-8966-8cc038cbbe64" 00:16:12.110 ], 00:16:12.110 "product_name": "Malloc disk", 00:16:12.110 "block_size": 512, 00:16:12.110 "num_blocks": 65536, 00:16:12.110 "uuid": "29be1ff1-d8bf-4e43-8966-8cc038cbbe64", 00:16:12.110 "assigned_rate_limits": { 00:16:12.110 "rw_ios_per_sec": 0, 00:16:12.110 "rw_mbytes_per_sec": 0, 00:16:12.110 "r_mbytes_per_sec": 0, 00:16:12.110 "w_mbytes_per_sec": 0 00:16:12.110 }, 00:16:12.110 "claimed": true, 00:16:12.110 "claim_type": "exclusive_write", 00:16:12.110 "zoned": false, 00:16:12.110 "supported_io_types": { 00:16:12.110 "read": true, 00:16:12.110 "write": true, 00:16:12.110 "unmap": true, 00:16:12.110 "write_zeroes": true, 00:16:12.110 "flush": true, 00:16:12.110 "reset": true, 00:16:12.110 "compare": false, 00:16:12.110 "compare_and_write": false, 00:16:12.110 "abort": true, 00:16:12.110 "nvme_admin": false, 00:16:12.110 "nvme_io": false 00:16:12.110 }, 00:16:12.110 "memory_domains": [ 00:16:12.110 { 00:16:12.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.110 "dma_device_type": 2 00:16:12.110 } 00:16:12.110 ], 00:16:12.110 "driver_specific": {} 00:16:12.110 } 00:16:12.110 ] 00:16:12.110 16:33:43 -- common/autotest_common.sh@895 -- # return 0 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.110 16:33:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
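Each verify_raid_bdev_state block in this trace reduces to the same assertion pattern: dump every raid bdev over RPC, isolate the one under test with jq, and compare individual fields against the expected values passed in as arguments. A condensed sketch, assuming $rpc abbreviates the rpc.py invocation used throughout this log; judging from the locals the helper declares, the real version in bdev_raid.sh also checks strip_size_kb and the discovered/operational counts together:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

  # Pull single fields out of the captured JSON object and assert on them.
  [ "$(jq -r '.state' <<<"$info")" = "configuring" ] || exit 1
  [ "$(jq -r '.raid_level' <<<"$info")" = "concat" ] || exit 1
  [ "$(jq -r '.num_base_bdevs_discovered' <<<"$info")" -eq 2 ] || exit 1

The expected values here ("configuring", concat, 2) match the point this trace has reached: BaseBdev1 and BaseBdev2 exist and are claimed, BaseBdev3 does not yet.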
00:16:12.376 16:33:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.376 "name": "Existed_Raid", 00:16:12.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.376 "strip_size_kb": 64, 00:16:12.376 "state": "configuring", 00:16:12.376 "raid_level": "concat", 00:16:12.376 "superblock": false, 00:16:12.376 "num_base_bdevs": 3, 00:16:12.376 "num_base_bdevs_discovered": 2, 00:16:12.376 "num_base_bdevs_operational": 3, 00:16:12.376 "base_bdevs_list": [ 00:16:12.376 { 00:16:12.376 "name": "BaseBdev1", 00:16:12.376 "uuid": "74216e34-b0b0-4945-af22-09d5a2a03629", 00:16:12.376 "is_configured": true, 00:16:12.376 "data_offset": 0, 00:16:12.376 "data_size": 65536 00:16:12.376 }, 00:16:12.376 { 00:16:12.376 "name": "BaseBdev2", 00:16:12.376 "uuid": "29be1ff1-d8bf-4e43-8966-8cc038cbbe64", 00:16:12.376 "is_configured": true, 00:16:12.376 "data_offset": 0, 00:16:12.376 "data_size": 65536 00:16:12.376 }, 00:16:12.376 { 00:16:12.376 "name": "BaseBdev3", 00:16:12.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.376 "is_configured": false, 00:16:12.376 "data_offset": 0, 00:16:12.376 "data_size": 0 00:16:12.376 } 00:16:12.376 ] 00:16:12.376 }' 00:16:12.376 16:33:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.376 16:33:43 -- common/autotest_common.sh@10 -- # set +x 00:16:12.957 16:33:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:13.216 [2024-07-13 16:33:44.552697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.216 [2024-07-13 16:33:44.552766] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:13.216 [2024-07-13 16:33:44.552775] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:13.216 [2024-07-13 16:33:44.552925] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:16:13.216 [2024-07-13 16:33:44.553332] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:13.216 [2024-07-13 16:33:44.553343] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:13.216 [2024-07-13 16:33:44.553603] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.216 BaseBdev3 00:16:13.216 16:33:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:13.216 16:33:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:13.216 16:33:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:13.216 16:33:44 -- common/autotest_common.sh@889 -- # local i 00:16:13.216 16:33:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:13.216 16:33:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:13.216 16:33:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.474 16:33:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:13.733 [ 00:16:13.733 { 00:16:13.733 "name": "BaseBdev3", 00:16:13.733 "aliases": [ 00:16:13.733 "988eb473-c266-4058-8bd4-d5dfcf028c6b" 00:16:13.733 ], 00:16:13.733 "product_name": "Malloc disk", 00:16:13.733 "block_size": 512, 00:16:13.733 "num_blocks": 65536, 00:16:13.733 "uuid": "988eb473-c266-4058-8bd4-d5dfcf028c6b", 00:16:13.733 "assigned_rate_limits": { 00:16:13.733 
"rw_ios_per_sec": 0, 00:16:13.733 "rw_mbytes_per_sec": 0, 00:16:13.733 "r_mbytes_per_sec": 0, 00:16:13.733 "w_mbytes_per_sec": 0 00:16:13.733 }, 00:16:13.733 "claimed": true, 00:16:13.733 "claim_type": "exclusive_write", 00:16:13.733 "zoned": false, 00:16:13.733 "supported_io_types": { 00:16:13.733 "read": true, 00:16:13.733 "write": true, 00:16:13.733 "unmap": true, 00:16:13.733 "write_zeroes": true, 00:16:13.733 "flush": true, 00:16:13.733 "reset": true, 00:16:13.733 "compare": false, 00:16:13.733 "compare_and_write": false, 00:16:13.733 "abort": true, 00:16:13.733 "nvme_admin": false, 00:16:13.733 "nvme_io": false 00:16:13.733 }, 00:16:13.733 "memory_domains": [ 00:16:13.733 { 00:16:13.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.733 "dma_device_type": 2 00:16:13.733 } 00:16:13.733 ], 00:16:13.733 "driver_specific": {} 00:16:13.733 } 00:16:13.733 ] 00:16:13.733 16:33:45 -- common/autotest_common.sh@895 -- # return 0 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.733 16:33:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.992 16:33:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.992 "name": "Existed_Raid", 00:16:13.992 "uuid": "5f44f40a-d694-42ea-9723-0265aa7f50b6", 00:16:13.992 "strip_size_kb": 64, 00:16:13.992 "state": "online", 00:16:13.992 "raid_level": "concat", 00:16:13.992 "superblock": false, 00:16:13.992 "num_base_bdevs": 3, 00:16:13.992 "num_base_bdevs_discovered": 3, 00:16:13.992 "num_base_bdevs_operational": 3, 00:16:13.992 "base_bdevs_list": [ 00:16:13.992 { 00:16:13.992 "name": "BaseBdev1", 00:16:13.992 "uuid": "74216e34-b0b0-4945-af22-09d5a2a03629", 00:16:13.992 "is_configured": true, 00:16:13.992 "data_offset": 0, 00:16:13.992 "data_size": 65536 00:16:13.992 }, 00:16:13.992 { 00:16:13.992 "name": "BaseBdev2", 00:16:13.992 "uuid": "29be1ff1-d8bf-4e43-8966-8cc038cbbe64", 00:16:13.992 "is_configured": true, 00:16:13.992 "data_offset": 0, 00:16:13.992 "data_size": 65536 00:16:13.992 }, 00:16:13.992 { 00:16:13.992 "name": "BaseBdev3", 00:16:13.992 "uuid": "988eb473-c266-4058-8bd4-d5dfcf028c6b", 00:16:13.992 "is_configured": true, 00:16:13.992 "data_offset": 0, 00:16:13.992 "data_size": 65536 00:16:13.992 } 00:16:13.992 ] 00:16:13.992 }' 00:16:13.992 16:33:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.992 16:33:45 -- common/autotest_common.sh@10 -- # set +x 00:16:14.674 16:33:45 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:14.674 [2024-07-13 16:33:46.113168] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:14.674 [2024-07-13 16:33:46.113223] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.674 [2024-07-13 16:33:46.113339] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.932 16:33:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.932 "name": "Existed_Raid", 00:16:14.932 "uuid": "5f44f40a-d694-42ea-9723-0265aa7f50b6", 00:16:14.932 "strip_size_kb": 64, 00:16:14.932 "state": "offline", 00:16:14.932 "raid_level": "concat", 00:16:14.932 "superblock": false, 00:16:14.932 "num_base_bdevs": 3, 00:16:14.932 "num_base_bdevs_discovered": 2, 00:16:14.932 "num_base_bdevs_operational": 2, 00:16:14.932 "base_bdevs_list": [ 00:16:14.932 { 00:16:14.932 "name": null, 00:16:14.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.932 "is_configured": false, 00:16:14.932 "data_offset": 0, 00:16:14.933 "data_size": 65536 00:16:14.933 }, 00:16:14.933 { 00:16:14.933 "name": "BaseBdev2", 00:16:14.933 "uuid": "29be1ff1-d8bf-4e43-8966-8cc038cbbe64", 00:16:14.933 "is_configured": true, 00:16:14.933 "data_offset": 0, 00:16:14.933 "data_size": 65536 00:16:14.933 }, 00:16:14.933 { 00:16:14.933 "name": "BaseBdev3", 00:16:14.933 "uuid": "988eb473-c266-4058-8bd4-d5dfcf028c6b", 00:16:14.933 "is_configured": true, 00:16:14.933 "data_offset": 0, 00:16:14.933 "data_size": 65536 00:16:14.933 } 00:16:14.933 ] 00:16:14.933 }' 00:16:14.933 16:33:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.933 16:33:46 -- common/autotest_common.sh@10 -- # set +x 00:16:15.866 16:33:46 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:15.866 16:33:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:15.866 16:33:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.866 16:33:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:15.866 16:33:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:15.866 16:33:47 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.866 16:33:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:16.125 [2024-07-13 16:33:47.437097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:16.125 16:33:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:16.125 16:33:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:16.125 16:33:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:16.125 16:33:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.383 16:33:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:16.383 16:33:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.383 16:33:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:16.641 [2024-07-13 16:33:47.970735] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:16.641 [2024-07-13 16:33:47.970812] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:16.641 16:33:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:16.641 16:33:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:16.641 16:33:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.641 16:33:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:16.900 16:33:48 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:16.900 16:33:48 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:16.900 16:33:48 -- bdev/bdev_raid.sh@287 -- # killprocess 126613 00:16:16.900 16:33:48 -- common/autotest_common.sh@926 -- # '[' -z 126613 ']' 00:16:16.900 16:33:48 -- common/autotest_common.sh@930 -- # kill -0 126613 00:16:16.900 16:33:48 -- common/autotest_common.sh@931 -- # uname 00:16:16.900 16:33:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.900 16:33:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126613 00:16:16.900 16:33:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:16.900 16:33:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:16.900 16:33:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126613' 00:16:16.900 killing process with pid 126613 00:16:16.900 16:33:48 -- common/autotest_common.sh@945 -- # kill 126613 00:16:16.900 16:33:48 -- common/autotest_common.sh@950 -- # wait 126613 00:16:16.900 [2024-07-13 16:33:48.282681] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.900 [2024-07-13 16:33:48.282778] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.468 ************************************ 00:16:17.468 END TEST raid_state_function_test 00:16:17.468 ************************************ 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:17.468 00:16:17.468 real 0m11.202s 00:16:17.468 user 0m19.611s 00:16:17.468 sys 0m2.095s 00:16:17.468 16:33:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.468 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:17.468 16:33:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
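The raid_state_function_test that just ended, like the earlier raid_superblock_test, tears down through the killprocess helper traced above: check that the pid is non-empty and still alive, verify on Linux that the target is not the sudo wrapper, then kill and wait so the app can emit its shutdown sequence (raid_bdev_fini_start through raid_bdev_exit). A condensed sketch of that logic, assuming the Linux branch since it is the only one this log exercises; the real helper handles a sudo-wrapped process specially, which this sketch simply refuses to do:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                  # process still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK app
      [ "$name" = "sudo" ] && return 1            # sketch: never kill the wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                 # reap it and let shutdown logs flush
  }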
00:16:17.468 16:33:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:17.468 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:16:17.468 ************************************ 00:16:17.468 START TEST raid_state_function_test_sb 00:16:17.468 ************************************ 00:16:17.468 16:33:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=126984 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126984' 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:17.468 Process raid pid: 126984 00:16:17.468 16:33:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126984 /var/tmp/spdk-raid.sock 00:16:17.468 16:33:48 -- common/autotest_common.sh@819 -- # '[' -z 126984 ']' 00:16:17.468 16:33:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:17.468 16:33:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.468 16:33:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:17.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:17.468 16:33:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.468 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:16:17.468 [2024-07-13 16:33:48.848098] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
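The only difference between this run and the concat run that just finished is the superblock flag: with superblock=true the helper adds -s to bdev_raid_create, so SPDK writes an on-disk superblock to each base bdev and the JSON dumps below report data_offset 2048 / data_size 63488 per 65536-block malloc bdev, where the non-superblock run reported 0 / 65536. A side-by-side sketch, with $rpc again standing in for the rpc.py invocation used in this log:

  # superblock=false (previous test): base bdevs contribute all 65536 blocks.
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # superblock=true (this test): -s reserves 2048 blocks per base bdev for the
  # superblock, leaving 63488 data blocks per member.
  $rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

That on-disk superblock is also what the raid_superblock_test earlier in this log relied on, when examine reported "raid superblock found on bdev pt1".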
00:16:17.468 [2024-07-13 16:33:48.848390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.727 [2024-07-13 16:33:48.999960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.727 [2024-07-13 16:33:49.079597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.727 [2024-07-13 16:33:49.159364] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.293 16:33:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.293 16:33:49 -- common/autotest_common.sh@852 -- # return 0 00:16:18.293 16:33:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:18.551 [2024-07-13 16:33:49.996519] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.551 [2024-07-13 16:33:49.996626] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.551 [2024-07-13 16:33:49.996640] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.551 [2024-07-13 16:33:49.996661] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.551 [2024-07-13 16:33:49.996668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.551 [2024-07-13 16:33:49.996722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.811 16:33:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.070 16:33:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.070 "name": "Existed_Raid", 00:16:19.070 "uuid": "5e7ffd09-f9c1-47f2-abca-01028a2a9d18", 00:16:19.070 "strip_size_kb": 64, 00:16:19.070 "state": "configuring", 00:16:19.070 "raid_level": "concat", 00:16:19.070 "superblock": true, 00:16:19.070 "num_base_bdevs": 3, 00:16:19.070 "num_base_bdevs_discovered": 0, 00:16:19.070 "num_base_bdevs_operational": 3, 00:16:19.070 "base_bdevs_list": [ 00:16:19.070 { 00:16:19.070 "name": "BaseBdev1", 00:16:19.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.070 "is_configured": false, 00:16:19.070 "data_offset": 0, 00:16:19.070 "data_size": 0 00:16:19.070 }, 00:16:19.070 { 00:16:19.070 "name": "BaseBdev2", 00:16:19.070 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:19.070 "is_configured": false, 00:16:19.070 "data_offset": 0, 00:16:19.070 "data_size": 0 00:16:19.070 }, 00:16:19.070 { 00:16:19.070 "name": "BaseBdev3", 00:16:19.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.070 "is_configured": false, 00:16:19.070 "data_offset": 0, 00:16:19.070 "data_size": 0 00:16:19.070 } 00:16:19.070 ] 00:16:19.070 }' 00:16:19.070 16:33:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.070 16:33:50 -- common/autotest_common.sh@10 -- # set +x 00:16:19.638 16:33:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:19.897 [2024-07-13 16:33:51.128532] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.897 [2024-07-13 16:33:51.128591] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:19.897 16:33:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:20.154 [2024-07-13 16:33:51.396646] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.154 [2024-07-13 16:33:51.396728] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.154 [2024-07-13 16:33:51.396739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.154 [2024-07-13 16:33:51.396763] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.154 [2024-07-13 16:33:51.396770] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:20.154 [2024-07-13 16:33:51.396797] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.154 16:33:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:20.412 [2024-07-13 16:33:51.676742] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.412 BaseBdev1 00:16:20.412 16:33:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:20.412 16:33:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:20.412 16:33:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:20.412 16:33:51 -- common/autotest_common.sh@889 -- # local i 00:16:20.412 16:33:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:20.412 16:33:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:20.412 16:33:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.670 16:33:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:20.670 [ 00:16:20.670 { 00:16:20.670 "name": "BaseBdev1", 00:16:20.670 "aliases": [ 00:16:20.670 "168e64a6-fdaf-492e-bb8c-a9eb790a8867" 00:16:20.670 ], 00:16:20.670 "product_name": "Malloc disk", 00:16:20.670 "block_size": 512, 00:16:20.670 "num_blocks": 65536, 00:16:20.670 "uuid": "168e64a6-fdaf-492e-bb8c-a9eb790a8867", 00:16:20.670 "assigned_rate_limits": { 00:16:20.670 "rw_ios_per_sec": 0, 00:16:20.670 "rw_mbytes_per_sec": 0, 00:16:20.670 "r_mbytes_per_sec": 0, 00:16:20.670 
"w_mbytes_per_sec": 0 00:16:20.670 }, 00:16:20.670 "claimed": true, 00:16:20.670 "claim_type": "exclusive_write", 00:16:20.670 "zoned": false, 00:16:20.670 "supported_io_types": { 00:16:20.670 "read": true, 00:16:20.670 "write": true, 00:16:20.670 "unmap": true, 00:16:20.670 "write_zeroes": true, 00:16:20.670 "flush": true, 00:16:20.670 "reset": true, 00:16:20.670 "compare": false, 00:16:20.670 "compare_and_write": false, 00:16:20.670 "abort": true, 00:16:20.670 "nvme_admin": false, 00:16:20.670 "nvme_io": false 00:16:20.670 }, 00:16:20.670 "memory_domains": [ 00:16:20.670 { 00:16:20.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.670 "dma_device_type": 2 00:16:20.670 } 00:16:20.670 ], 00:16:20.670 "driver_specific": {} 00:16:20.670 } 00:16:20.670 ] 00:16:20.670 16:33:52 -- common/autotest_common.sh@895 -- # return 0 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.670 16:33:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.928 16:33:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.928 "name": "Existed_Raid", 00:16:20.928 "uuid": "d6351d70-2d9d-4232-8b15-1c91e58d2bd1", 00:16:20.928 "strip_size_kb": 64, 00:16:20.928 "state": "configuring", 00:16:20.928 "raid_level": "concat", 00:16:20.928 "superblock": true, 00:16:20.928 "num_base_bdevs": 3, 00:16:20.928 "num_base_bdevs_discovered": 1, 00:16:20.928 "num_base_bdevs_operational": 3, 00:16:20.928 "base_bdevs_list": [ 00:16:20.928 { 00:16:20.928 "name": "BaseBdev1", 00:16:20.928 "uuid": "168e64a6-fdaf-492e-bb8c-a9eb790a8867", 00:16:20.928 "is_configured": true, 00:16:20.928 "data_offset": 2048, 00:16:20.928 "data_size": 63488 00:16:20.928 }, 00:16:20.928 { 00:16:20.928 "name": "BaseBdev2", 00:16:20.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.928 "is_configured": false, 00:16:20.928 "data_offset": 0, 00:16:20.928 "data_size": 0 00:16:20.928 }, 00:16:20.928 { 00:16:20.928 "name": "BaseBdev3", 00:16:20.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.929 "is_configured": false, 00:16:20.929 "data_offset": 0, 00:16:20.929 "data_size": 0 00:16:20.929 } 00:16:20.929 ] 00:16:20.929 }' 00:16:20.929 16:33:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.929 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:16:21.861 16:33:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:21.861 [2024-07-13 16:33:53.242570] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.861 [2024-07-13 16:33:53.242652] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:21.861 16:33:53 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:21.861 16:33:53 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:22.118 16:33:53 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.377 BaseBdev1 00:16:22.377 16:33:53 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:22.377 16:33:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:22.377 16:33:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:22.377 16:33:53 -- common/autotest_common.sh@889 -- # local i 00:16:22.377 16:33:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:22.377 16:33:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:22.377 16:33:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:22.635 16:33:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.894 [ 00:16:22.894 { 00:16:22.894 "name": "BaseBdev1", 00:16:22.894 "aliases": [ 00:16:22.894 "112ae1ca-deec-46c1-813d-f020ef7b7c6d" 00:16:22.894 ], 00:16:22.894 "product_name": "Malloc disk", 00:16:22.894 "block_size": 512, 00:16:22.894 "num_blocks": 65536, 00:16:22.894 "uuid": "112ae1ca-deec-46c1-813d-f020ef7b7c6d", 00:16:22.894 "assigned_rate_limits": { 00:16:22.894 "rw_ios_per_sec": 0, 00:16:22.894 "rw_mbytes_per_sec": 0, 00:16:22.894 "r_mbytes_per_sec": 0, 00:16:22.894 "w_mbytes_per_sec": 0 00:16:22.894 }, 00:16:22.894 "claimed": false, 00:16:22.894 "zoned": false, 00:16:22.894 "supported_io_types": { 00:16:22.894 "read": true, 00:16:22.894 "write": true, 00:16:22.894 "unmap": true, 00:16:22.894 "write_zeroes": true, 00:16:22.894 "flush": true, 00:16:22.894 "reset": true, 00:16:22.894 "compare": false, 00:16:22.894 "compare_and_write": false, 00:16:22.894 "abort": true, 00:16:22.894 "nvme_admin": false, 00:16:22.894 "nvme_io": false 00:16:22.894 }, 00:16:22.894 "memory_domains": [ 00:16:22.894 { 00:16:22.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.894 "dma_device_type": 2 00:16:22.894 } 00:16:22.894 ], 00:16:22.894 "driver_specific": {} 00:16:22.894 } 00:16:22.894 ] 00:16:22.894 16:33:54 -- common/autotest_common.sh@895 -- # return 0 00:16:22.894 16:33:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:23.152 [2024-07-13 16:33:54.375631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.152 [2024-07-13 16:33:54.378204] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.152 [2024-07-13 16:33:54.378276] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.152 [2024-07-13 16:33:54.378288] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.152 [2024-07-13 16:33:54.378316] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:23.152 
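(Editorial aside: the waitforbdev helper traced above is just a polling wrapper around two RPCs. A minimal sketch of the same readiness check, reusing the rpc.py path and RPC socket from this run; the 2000 ms timeout mirrors the -t 2000 argument in the trace:)

# Sketch: block until bdev examine completes, then confirm the bdev exists,
# giving up after 2000 ms (matches the harness's waitforbdev behaviour).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000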
16:33:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.152 16:33:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.410 16:33:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.410 "name": "Existed_Raid", 00:16:23.410 "uuid": "e8d44485-5ac2-44b0-9c9c-3f60fffc2d40", 00:16:23.410 "strip_size_kb": 64, 00:16:23.410 "state": "configuring", 00:16:23.410 "raid_level": "concat", 00:16:23.410 "superblock": true, 00:16:23.410 "num_base_bdevs": 3, 00:16:23.410 "num_base_bdevs_discovered": 1, 00:16:23.410 "num_base_bdevs_operational": 3, 00:16:23.410 "base_bdevs_list": [ 00:16:23.410 { 00:16:23.410 "name": "BaseBdev1", 00:16:23.410 "uuid": "112ae1ca-deec-46c1-813d-f020ef7b7c6d", 00:16:23.410 "is_configured": true, 00:16:23.410 "data_offset": 2048, 00:16:23.410 "data_size": 63488 00:16:23.410 }, 00:16:23.410 { 00:16:23.410 "name": "BaseBdev2", 00:16:23.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.410 "is_configured": false, 00:16:23.410 "data_offset": 0, 00:16:23.410 "data_size": 0 00:16:23.410 }, 00:16:23.410 { 00:16:23.410 "name": "BaseBdev3", 00:16:23.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.410 "is_configured": false, 00:16:23.410 "data_offset": 0, 00:16:23.410 "data_size": 0 00:16:23.410 } 00:16:23.410 ] 00:16:23.410 }' 00:16:23.410 16:33:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.410 16:33:54 -- common/autotest_common.sh@10 -- # set +x 00:16:23.978 16:33:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:24.237 [2024-07-13 16:33:55.466168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.237 BaseBdev2 00:16:24.237 16:33:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:24.237 16:33:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:24.237 16:33:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:24.237 16:33:55 -- common/autotest_common.sh@889 -- # local i 00:16:24.237 16:33:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:24.237 16:33:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:24.237 16:33:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.495 16:33:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:24.753 [ 00:16:24.754 { 00:16:24.754 "name": "BaseBdev2", 00:16:24.754 "aliases": [ 00:16:24.754 
"be72f18a-4ab1-4710-8e59-f1d58d95814e" 00:16:24.754 ], 00:16:24.754 "product_name": "Malloc disk", 00:16:24.754 "block_size": 512, 00:16:24.754 "num_blocks": 65536, 00:16:24.754 "uuid": "be72f18a-4ab1-4710-8e59-f1d58d95814e", 00:16:24.754 "assigned_rate_limits": { 00:16:24.754 "rw_ios_per_sec": 0, 00:16:24.754 "rw_mbytes_per_sec": 0, 00:16:24.754 "r_mbytes_per_sec": 0, 00:16:24.754 "w_mbytes_per_sec": 0 00:16:24.754 }, 00:16:24.754 "claimed": true, 00:16:24.754 "claim_type": "exclusive_write", 00:16:24.754 "zoned": false, 00:16:24.754 "supported_io_types": { 00:16:24.754 "read": true, 00:16:24.754 "write": true, 00:16:24.754 "unmap": true, 00:16:24.754 "write_zeroes": true, 00:16:24.754 "flush": true, 00:16:24.754 "reset": true, 00:16:24.754 "compare": false, 00:16:24.754 "compare_and_write": false, 00:16:24.754 "abort": true, 00:16:24.754 "nvme_admin": false, 00:16:24.754 "nvme_io": false 00:16:24.754 }, 00:16:24.754 "memory_domains": [ 00:16:24.754 { 00:16:24.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.754 "dma_device_type": 2 00:16:24.754 } 00:16:24.754 ], 00:16:24.754 "driver_specific": {} 00:16:24.754 } 00:16:24.754 ] 00:16:24.754 16:33:56 -- common/autotest_common.sh@895 -- # return 0 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.754 16:33:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.013 16:33:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.013 "name": "Existed_Raid", 00:16:25.013 "uuid": "e8d44485-5ac2-44b0-9c9c-3f60fffc2d40", 00:16:25.013 "strip_size_kb": 64, 00:16:25.013 "state": "configuring", 00:16:25.013 "raid_level": "concat", 00:16:25.013 "superblock": true, 00:16:25.013 "num_base_bdevs": 3, 00:16:25.013 "num_base_bdevs_discovered": 2, 00:16:25.013 "num_base_bdevs_operational": 3, 00:16:25.013 "base_bdevs_list": [ 00:16:25.013 { 00:16:25.013 "name": "BaseBdev1", 00:16:25.013 "uuid": "112ae1ca-deec-46c1-813d-f020ef7b7c6d", 00:16:25.013 "is_configured": true, 00:16:25.013 "data_offset": 2048, 00:16:25.013 "data_size": 63488 00:16:25.013 }, 00:16:25.013 { 00:16:25.013 "name": "BaseBdev2", 00:16:25.013 "uuid": "be72f18a-4ab1-4710-8e59-f1d58d95814e", 00:16:25.013 "is_configured": true, 00:16:25.013 "data_offset": 2048, 00:16:25.013 "data_size": 63488 00:16:25.013 }, 00:16:25.013 { 00:16:25.013 "name": "BaseBdev3", 00:16:25.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.013 "is_configured": false, 00:16:25.013 "data_offset": 0, 00:16:25.013 "data_size": 0 
00:16:25.013 } 00:16:25.013 ] 00:16:25.013 }' 00:16:25.013 16:33:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.013 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:16:25.581 16:33:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:25.581 [2024-07-13 16:33:57.036162] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.581 [2024-07-13 16:33:57.036430] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:16:25.581 [2024-07-13 16:33:57.036443] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:25.581 [2024-07-13 16:33:57.036584] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:25.581 [2024-07-13 16:33:57.037003] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:16:25.581 [2024-07-13 16:33:57.037015] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:16:25.581 [2024-07-13 16:33:57.037157] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.581 BaseBdev3 00:16:25.840 16:33:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:25.840 16:33:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:25.840 16:33:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:25.840 16:33:57 -- common/autotest_common.sh@889 -- # local i 00:16:25.840 16:33:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:25.840 16:33:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:25.840 16:33:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:26.099 16:33:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:26.099 [ 00:16:26.099 { 00:16:26.099 "name": "BaseBdev3", 00:16:26.099 "aliases": [ 00:16:26.099 "73aab69b-546f-4283-89e9-ec84f9700a04" 00:16:26.099 ], 00:16:26.099 "product_name": "Malloc disk", 00:16:26.099 "block_size": 512, 00:16:26.099 "num_blocks": 65536, 00:16:26.099 "uuid": "73aab69b-546f-4283-89e9-ec84f9700a04", 00:16:26.099 "assigned_rate_limits": { 00:16:26.099 "rw_ios_per_sec": 0, 00:16:26.099 "rw_mbytes_per_sec": 0, 00:16:26.099 "r_mbytes_per_sec": 0, 00:16:26.099 "w_mbytes_per_sec": 0 00:16:26.099 }, 00:16:26.099 "claimed": true, 00:16:26.099 "claim_type": "exclusive_write", 00:16:26.099 "zoned": false, 00:16:26.099 "supported_io_types": { 00:16:26.099 "read": true, 00:16:26.099 "write": true, 00:16:26.099 "unmap": true, 00:16:26.099 "write_zeroes": true, 00:16:26.099 "flush": true, 00:16:26.099 "reset": true, 00:16:26.099 "compare": false, 00:16:26.099 "compare_and_write": false, 00:16:26.099 "abort": true, 00:16:26.099 "nvme_admin": false, 00:16:26.099 "nvme_io": false 00:16:26.099 }, 00:16:26.099 "memory_domains": [ 00:16:26.099 { 00:16:26.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.099 "dma_device_type": 2 00:16:26.099 } 00:16:26.099 ], 00:16:26.099 "driver_specific": {} 00:16:26.099 } 00:16:26.099 ] 00:16:26.099 16:33:57 -- common/autotest_common.sh@895 -- # return 0 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:26.099 16:33:57 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.099 16:33:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.359 16:33:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.359 "name": "Existed_Raid", 00:16:26.359 "uuid": "e8d44485-5ac2-44b0-9c9c-3f60fffc2d40", 00:16:26.359 "strip_size_kb": 64, 00:16:26.359 "state": "online", 00:16:26.359 "raid_level": "concat", 00:16:26.359 "superblock": true, 00:16:26.359 "num_base_bdevs": 3, 00:16:26.359 "num_base_bdevs_discovered": 3, 00:16:26.359 "num_base_bdevs_operational": 3, 00:16:26.359 "base_bdevs_list": [ 00:16:26.359 { 00:16:26.359 "name": "BaseBdev1", 00:16:26.359 "uuid": "112ae1ca-deec-46c1-813d-f020ef7b7c6d", 00:16:26.359 "is_configured": true, 00:16:26.359 "data_offset": 2048, 00:16:26.359 "data_size": 63488 00:16:26.359 }, 00:16:26.359 { 00:16:26.359 "name": "BaseBdev2", 00:16:26.359 "uuid": "be72f18a-4ab1-4710-8e59-f1d58d95814e", 00:16:26.359 "is_configured": true, 00:16:26.359 "data_offset": 2048, 00:16:26.359 "data_size": 63488 00:16:26.359 }, 00:16:26.359 { 00:16:26.359 "name": "BaseBdev3", 00:16:26.359 "uuid": "73aab69b-546f-4283-89e9-ec84f9700a04", 00:16:26.359 "is_configured": true, 00:16:26.359 "data_offset": 2048, 00:16:26.359 "data_size": 63488 00:16:26.359 } 00:16:26.359 ] 00:16:26.359 }' 00:16:26.359 16:33:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.359 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:16:26.928 16:33:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:27.187 [2024-07-13 16:33:58.524634] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.187 [2024-07-13 16:33:58.524689] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.187 [2024-07-13 16:33:58.524763] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:27.187 16:33:58 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.187 16:33:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.447 16:33:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.447 "name": "Existed_Raid", 00:16:27.447 "uuid": "e8d44485-5ac2-44b0-9c9c-3f60fffc2d40", 00:16:27.447 "strip_size_kb": 64, 00:16:27.447 "state": "offline", 00:16:27.447 "raid_level": "concat", 00:16:27.447 "superblock": true, 00:16:27.447 "num_base_bdevs": 3, 00:16:27.447 "num_base_bdevs_discovered": 2, 00:16:27.447 "num_base_bdevs_operational": 2, 00:16:27.447 "base_bdevs_list": [ 00:16:27.447 { 00:16:27.447 "name": null, 00:16:27.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.447 "is_configured": false, 00:16:27.447 "data_offset": 2048, 00:16:27.447 "data_size": 63488 00:16:27.447 }, 00:16:27.447 { 00:16:27.447 "name": "BaseBdev2", 00:16:27.447 "uuid": "be72f18a-4ab1-4710-8e59-f1d58d95814e", 00:16:27.447 "is_configured": true, 00:16:27.447 "data_offset": 2048, 00:16:27.447 "data_size": 63488 00:16:27.447 }, 00:16:27.447 { 00:16:27.447 "name": "BaseBdev3", 00:16:27.447 "uuid": "73aab69b-546f-4283-89e9-ec84f9700a04", 00:16:27.447 "is_configured": true, 00:16:27.447 "data_offset": 2048, 00:16:27.447 "data_size": 63488 00:16:27.447 } 00:16:27.447 ] 00:16:27.447 }' 00:16:27.447 16:33:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.447 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.016 16:33:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:28.016 16:33:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:28.016 16:33:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.016 16:33:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:28.275 16:33:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:28.275 16:33:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.275 16:33:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:28.545 [2024-07-13 16:33:59.870027] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.545 16:33:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:28.545 16:33:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:28.545 16:33:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.545 16:33:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:28.858 16:34:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:28.858 16:34:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.858 16:34:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:29.117 [2024-07-13 16:34:00.375541] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
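(Editorial aside: because concat carries no redundancy, the trace above shows the array dropping from online to offline as soon as a base bdev is deleted. A minimal sketch of that check, assuming the same socket as this run; the trailing .state selector is illustrative, modelled on the select() filters the harness itself uses:)

# Sketch: delete one base bdev, then read back the array state, which
# should report "offline" for a concat (no-redundancy) raid level.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'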
00:16:29.117 [2024-07-13 16:34:00.375867] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:16:29.117 16:34:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:29.117 16:34:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:29.117 16:34:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.117 16:34:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:29.376 16:34:00 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:29.376 16:34:00 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:29.376 16:34:00 -- bdev/bdev_raid.sh@287 -- # killprocess 126984 00:16:29.376 16:34:00 -- common/autotest_common.sh@926 -- # '[' -z 126984 ']' 00:16:29.376 16:34:00 -- common/autotest_common.sh@930 -- # kill -0 126984 00:16:29.376 16:34:00 -- common/autotest_common.sh@931 -- # uname 00:16:29.376 16:34:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:29.376 16:34:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126984 00:16:29.376 16:34:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:29.376 16:34:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:29.376 16:34:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126984' 00:16:29.376 killing process with pid 126984 00:16:29.376 16:34:00 -- common/autotest_common.sh@945 -- # kill 126984 00:16:29.376 [2024-07-13 16:34:00.720973] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.376 16:34:00 -- common/autotest_common.sh@950 -- # wait 126984 00:16:29.376 [2024-07-13 16:34:00.721269] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:29.944 00:16:29.944 real 0m12.369s 00:16:29.944 user 0m21.931s 00:16:29.944 sys 0m2.108s 00:16:29.944 16:34:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.944 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:16:29.944 ************************************ 00:16:29.944 END TEST raid_state_function_test_sb 00:16:29.944 ************************************ 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:29.944 16:34:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:29.944 16:34:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:29.944 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:16:29.944 ************************************ 00:16:29.944 START TEST raid_superblock_test 00:16:29.944 ************************************ 00:16:29.944 16:34:01 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@344 -- # local strip_size 
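(Editorial aside: the raid_superblock_test starting here follows the same RPC-driven pattern as the previous test, but wraps each malloc bdev in a passthru bdev (pt1..pt3) and assembles the array with on-disk superblocks via the -s flag. A minimal sketch of the assembly step, lifted from the command that appears later in this trace:)

# Sketch: create a 3-member concat array with superblock metadata
# (-z 64 = 64 KiB strip size, -s = write a superblock to each base bdev).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'pt1 pt2 pt3' -n raid_bdev1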
00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@357 -- # raid_pid=127372 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127372 /var/tmp/spdk-raid.sock 00:16:29.944 16:34:01 -- common/autotest_common.sh@819 -- # '[' -z 127372 ']' 00:16:29.944 16:34:01 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:29.944 16:34:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:29.944 16:34:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:29.944 16:34:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:29.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:29.944 16:34:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:29.944 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:16:29.944 [2024-07-13 16:34:01.283547] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:29.944 [2024-07-13 16:34:01.284801] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127372 ] 00:16:30.203 [2024-07-13 16:34:01.436606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.203 [2024-07-13 16:34:01.517340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.203 [2024-07-13 16:34:01.596024] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.771 16:34:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:30.771 16:34:02 -- common/autotest_common.sh@852 -- # return 0 00:16:30.771 16:34:02 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:30.771 16:34:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:30.771 16:34:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:30.771 16:34:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:30.771 16:34:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:30.771 16:34:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.771 16:34:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.771 16:34:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.771 16:34:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:31.031 malloc1 00:16:31.031 16:34:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:31.290 [2024-07-13 16:34:02.719157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:31.290 [2024-07-13 16:34:02.719569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:31.290 [2024-07-13 16:34:02.719661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:16:31.290 [2024-07-13 16:34:02.719800] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.290 [2024-07-13 16:34:02.722860] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.290 [2024-07-13 16:34:02.723043] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.290 pt1 00:16:31.290 16:34:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:31.290 16:34:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:31.290 16:34:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:31.290 16:34:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:31.290 16:34:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:31.290 16:34:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:31.290 16:34:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:31.290 16:34:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:31.290 16:34:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:31.548 malloc2 00:16:31.805 16:34:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:31.806 [2024-07-13 16:34:03.247333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:31.806 [2024-07-13 16:34:03.247712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.806 [2024-07-13 16:34:03.247792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:31.806 [2024-07-13 16:34:03.247946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.806 [2024-07-13 16:34:03.250806] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.806 [2024-07-13 16:34:03.250967] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:31.806 pt2 00:16:31.806 16:34:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:31.806 16:34:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:31.806 16:34:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:31.806 16:34:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:31.806 16:34:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:31.806 16:34:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:31.806 16:34:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:31.806 16:34:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:31.806 16:34:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:32.064 malloc3 00:16:32.064 16:34:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:32.321 [2024-07-13 16:34:03.690130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:32.321 [2024-07-13 16:34:03.690508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:32.321 [2024-07-13 16:34:03.690593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:32.321 [2024-07-13 16:34:03.690720] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.321 [2024-07-13 16:34:03.693683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.321 [2024-07-13 16:34:03.693859] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:32.321 pt3 00:16:32.321 16:34:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:32.321 16:34:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:32.321 16:34:03 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:32.578 [2024-07-13 16:34:03.942374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.578 [2024-07-13 16:34:03.945132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.578 [2024-07-13 16:34:03.945341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:32.578 [2024-07-13 16:34:03.945602] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:32.578 [2024-07-13 16:34:03.945783] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:32.578 [2024-07-13 16:34:03.946032] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:16:32.578 [2024-07-13 16:34:03.946530] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:32.578 [2024-07-13 16:34:03.946637] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:16:32.578 [2024-07-13 16:34:03.946941] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.578 16:34:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.837 16:34:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.837 "name": "raid_bdev1", 00:16:32.837 "uuid": "751ae657-4b47-4439-991c-ec8308887889", 00:16:32.837 "strip_size_kb": 64, 00:16:32.837 "state": "online", 00:16:32.837 "raid_level": "concat", 00:16:32.837 "superblock": true, 00:16:32.837 "num_base_bdevs": 3, 00:16:32.837 "num_base_bdevs_discovered": 3, 00:16:32.837 "num_base_bdevs_operational": 3, 00:16:32.837 "base_bdevs_list": [ 00:16:32.837 { 00:16:32.837 "name": "pt1", 00:16:32.837 "uuid": 
"c1a4feeb-1314-5ddc-be4e-aad489f2ab60", 00:16:32.837 "is_configured": true, 00:16:32.837 "data_offset": 2048, 00:16:32.837 "data_size": 63488 00:16:32.837 }, 00:16:32.837 { 00:16:32.837 "name": "pt2", 00:16:32.837 "uuid": "c54b8ec4-04db-525d-a286-6ea75eb7f00b", 00:16:32.837 "is_configured": true, 00:16:32.837 "data_offset": 2048, 00:16:32.837 "data_size": 63488 00:16:32.837 }, 00:16:32.837 { 00:16:32.837 "name": "pt3", 00:16:32.837 "uuid": "b80828de-1dc9-591c-87d8-08260586776e", 00:16:32.837 "is_configured": true, 00:16:32.837 "data_offset": 2048, 00:16:32.837 "data_size": 63488 00:16:32.837 } 00:16:32.837 ] 00:16:32.837 }' 00:16:32.837 16:34:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.837 16:34:04 -- common/autotest_common.sh@10 -- # set +x 00:16:33.404 16:34:04 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:33.404 16:34:04 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:33.663 [2024-07-13 16:34:05.003298] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.663 16:34:05 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=751ae657-4b47-4439-991c-ec8308887889 00:16:33.663 16:34:05 -- bdev/bdev_raid.sh@380 -- # '[' -z 751ae657-4b47-4439-991c-ec8308887889 ']' 00:16:33.663 16:34:05 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:33.922 [2024-07-13 16:34:05.255138] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.922 [2024-07-13 16:34:05.255408] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.922 [2024-07-13 16:34:05.255656] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.922 [2024-07-13 16:34:05.255851] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.922 [2024-07-13 16:34:05.255929] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:16:33.922 16:34:05 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.922 16:34:05 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:34.181 16:34:05 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:34.181 16:34:05 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:34.181 16:34:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.181 16:34:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:34.441 16:34:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.441 16:34:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:34.699 16:34:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.700 16:34:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:34.700 16:34:06 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:34.700 16:34:06 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:34.959 16:34:06 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:34.959 16:34:06 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:34.959 16:34:06 -- common/autotest_common.sh@640 -- # local es=0 00:16:34.959 16:34:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:34.959 16:34:06 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:34.959 16:34:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:34.959 16:34:06 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:34.959 16:34:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:34.959 16:34:06 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:34.959 16:34:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:34.959 16:34:06 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:34.959 16:34:06 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:34.959 16:34:06 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:35.217 [2024-07-13 16:34:06.547320] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:35.217 [2024-07-13 16:34:06.550093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:35.217 [2024-07-13 16:34:06.550271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:35.217 [2024-07-13 16:34:06.550359] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:35.217 [2024-07-13 16:34:06.550531] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:35.217 [2024-07-13 16:34:06.550638] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:35.217 [2024-07-13 16:34:06.550718] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.217 [2024-07-13 16:34:06.550799] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:16:35.217 request: 00:16:35.217 { 00:16:35.217 "name": "raid_bdev1", 00:16:35.217 "raid_level": "concat", 00:16:35.217 "base_bdevs": [ 00:16:35.217 "malloc1", 00:16:35.217 "malloc2", 00:16:35.217 "malloc3" 00:16:35.217 ], 00:16:35.217 "superblock": false, 00:16:35.217 "strip_size_kb": 64, 00:16:35.217 "method": "bdev_raid_create", 00:16:35.217 "req_id": 1 00:16:35.217 } 00:16:35.217 Got JSON-RPC error response 00:16:35.217 response: 00:16:35.217 { 00:16:35.217 "code": -17, 00:16:35.217 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:35.217 } 00:16:35.217 16:34:06 -- common/autotest_common.sh@643 -- # es=1 00:16:35.217 16:34:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:35.217 16:34:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:35.217 16:34:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:35.217 16:34:06 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.217 16:34:06 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:35.475 16:34:06 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:35.475 16:34:06 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:35.475 16:34:06 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:35.733 [2024-07-13 16:34:07.159362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:35.733 [2024-07-13 16:34:07.159656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.733 [2024-07-13 16:34:07.159737] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:35.733 [2024-07-13 16:34:07.159848] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.733 [2024-07-13 16:34:07.162931] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.733 [2024-07-13 16:34:07.163087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:35.733 [2024-07-13 16:34:07.163335] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:35.733 [2024-07-13 16:34:07.163507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:35.733 pt1 00:16:35.733 16:34:07 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:35.733 16:34:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.734 16:34:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.992 16:34:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.992 "name": "raid_bdev1", 00:16:35.992 "uuid": "751ae657-4b47-4439-991c-ec8308887889", 00:16:35.992 "strip_size_kb": 64, 00:16:35.992 "state": "configuring", 00:16:35.992 "raid_level": "concat", 00:16:35.992 "superblock": true, 00:16:35.992 "num_base_bdevs": 3, 00:16:35.992 "num_base_bdevs_discovered": 1, 00:16:35.992 "num_base_bdevs_operational": 3, 00:16:35.992 "base_bdevs_list": [ 00:16:35.992 { 00:16:35.992 "name": "pt1", 00:16:35.992 "uuid": "c1a4feeb-1314-5ddc-be4e-aad489f2ab60", 00:16:35.992 "is_configured": true, 00:16:35.992 "data_offset": 2048, 00:16:35.992 "data_size": 63488 00:16:35.992 }, 00:16:35.992 { 00:16:35.992 "name": null, 00:16:35.992 "uuid": "c54b8ec4-04db-525d-a286-6ea75eb7f00b", 00:16:35.992 "is_configured": false, 00:16:35.992 "data_offset": 2048, 00:16:35.992 "data_size": 63488 00:16:35.992 }, 00:16:35.992 { 00:16:35.992 "name": null, 00:16:35.992 "uuid": "b80828de-1dc9-591c-87d8-08260586776e", 00:16:35.992 "is_configured": false, 00:16:35.992 
"data_offset": 2048, 00:16:35.992 "data_size": 63488 00:16:35.992 } 00:16:35.992 ] 00:16:35.992 }' 00:16:35.992 16:34:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.992 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:16:36.559 16:34:07 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:36.559 16:34:07 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.817 [2024-07-13 16:34:08.227636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.817 [2024-07-13 16:34:08.227932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.817 [2024-07-13 16:34:08.228097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:36.817 [2024-07-13 16:34:08.228218] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.817 [2024-07-13 16:34:08.228794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.817 [2024-07-13 16:34:08.228933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.817 [2024-07-13 16:34:08.229135] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:36.817 [2024-07-13 16:34:08.229271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.817 pt2 00:16:36.817 16:34:08 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:37.074 [2024-07-13 16:34:08.499727] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.074 16:34:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.333 16:34:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:37.333 "name": "raid_bdev1", 00:16:37.333 "uuid": "751ae657-4b47-4439-991c-ec8308887889", 00:16:37.333 "strip_size_kb": 64, 00:16:37.333 "state": "configuring", 00:16:37.333 "raid_level": "concat", 00:16:37.333 "superblock": true, 00:16:37.333 "num_base_bdevs": 3, 00:16:37.333 "num_base_bdevs_discovered": 1, 00:16:37.333 "num_base_bdevs_operational": 3, 00:16:37.333 "base_bdevs_list": [ 00:16:37.333 { 00:16:37.333 "name": "pt1", 00:16:37.333 "uuid": "c1a4feeb-1314-5ddc-be4e-aad489f2ab60", 00:16:37.333 "is_configured": true, 00:16:37.333 "data_offset": 2048, 00:16:37.333 "data_size": 63488 00:16:37.333 }, 00:16:37.333 { 00:16:37.333 "name": null, 00:16:37.333 "uuid": 
"c54b8ec4-04db-525d-a286-6ea75eb7f00b", 00:16:37.333 "is_configured": false, 00:16:37.333 "data_offset": 2048, 00:16:37.333 "data_size": 63488 00:16:37.333 }, 00:16:37.333 { 00:16:37.333 "name": null, 00:16:37.333 "uuid": "b80828de-1dc9-591c-87d8-08260586776e", 00:16:37.333 "is_configured": false, 00:16:37.333 "data_offset": 2048, 00:16:37.333 "data_size": 63488 00:16:37.333 } 00:16:37.333 ] 00:16:37.333 }' 00:16:37.333 16:34:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:37.333 16:34:08 -- common/autotest_common.sh@10 -- # set +x 00:16:38.268 16:34:09 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:38.269 16:34:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:38.269 16:34:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.269 [2024-07-13 16:34:09.607897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.269 [2024-07-13 16:34:09.608283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.269 [2024-07-13 16:34:09.608380] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:38.269 [2024-07-13 16:34:09.608495] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.269 [2024-07-13 16:34:09.609048] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.269 [2024-07-13 16:34:09.609216] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.269 [2024-07-13 16:34:09.609447] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:38.269 [2024-07-13 16:34:09.609554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.269 pt2 00:16:38.269 16:34:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:38.269 16:34:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:38.269 16:34:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:38.527 [2024-07-13 16:34:09.803954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:38.527 [2024-07-13 16:34:09.804329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.527 [2024-07-13 16:34:09.804408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:38.527 [2024-07-13 16:34:09.804511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.527 [2024-07-13 16:34:09.805149] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.527 [2024-07-13 16:34:09.805345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:38.527 [2024-07-13 16:34:09.805548] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:38.527 [2024-07-13 16:34:09.805656] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:38.527 [2024-07-13 16:34:09.805822] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:38.527 [2024-07-13 16:34:09.805950] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:38.527 [2024-07-13 16:34:09.806089] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:16:38.527 [2024-07-13 16:34:09.806615] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:38.527 [2024-07-13 16:34:09.806714] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:38.527 [2024-07-13 16:34:09.806914] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.527 pt3 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.527 16:34:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.785 16:34:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.785 "name": "raid_bdev1", 00:16:38.785 "uuid": "751ae657-4b47-4439-991c-ec8308887889", 00:16:38.785 "strip_size_kb": 64, 00:16:38.785 "state": "online", 00:16:38.785 "raid_level": "concat", 00:16:38.785 "superblock": true, 00:16:38.785 "num_base_bdevs": 3, 00:16:38.785 "num_base_bdevs_discovered": 3, 00:16:38.785 "num_base_bdevs_operational": 3, 00:16:38.785 "base_bdevs_list": [ 00:16:38.785 { 00:16:38.785 "name": "pt1", 00:16:38.785 "uuid": "c1a4feeb-1314-5ddc-be4e-aad489f2ab60", 00:16:38.785 "is_configured": true, 00:16:38.785 "data_offset": 2048, 00:16:38.785 "data_size": 63488 00:16:38.785 }, 00:16:38.785 { 00:16:38.785 "name": "pt2", 00:16:38.785 "uuid": "c54b8ec4-04db-525d-a286-6ea75eb7f00b", 00:16:38.785 "is_configured": true, 00:16:38.785 "data_offset": 2048, 00:16:38.785 "data_size": 63488 00:16:38.785 }, 00:16:38.785 { 00:16:38.785 "name": "pt3", 00:16:38.785 "uuid": "b80828de-1dc9-591c-87d8-08260586776e", 00:16:38.785 "is_configured": true, 00:16:38.785 "data_offset": 2048, 00:16:38.785 "data_size": 63488 00:16:38.785 } 00:16:38.785 ] 00:16:38.785 }' 00:16:38.785 16:34:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.785 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:16:39.351 16:34:10 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:39.351 16:34:10 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:39.610 [2024-07-13 16:34:10.872423] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.610 16:34:10 -- bdev/bdev_raid.sh@430 -- # '[' 751ae657-4b47-4439-991c-ec8308887889 '!=' 751ae657-4b47-4439-991c-ec8308887889 ']' 00:16:39.610 16:34:10 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:39.610 16:34:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:39.610 
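(Editorial aside: note what the trace just showed — after pt2 and pt3 are re-registered, bdev examine finds the raid superblock on each passthru bdev and reassembles raid_bdev1 without any explicit bdev_raid_create call. A minimal sketch of the post-assembly identity check, matching the jq filter used in this run:)

# Sketch: confirm the reassembled array kept its original UUID.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'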
16:34:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:39.610 16:34:10 -- bdev/bdev_raid.sh@511 -- # killprocess 127372 00:16:39.610 16:34:10 -- common/autotest_common.sh@926 -- # '[' -z 127372 ']' 00:16:39.610 16:34:10 -- common/autotest_common.sh@930 -- # kill -0 127372 00:16:39.610 16:34:10 -- common/autotest_common.sh@931 -- # uname 00:16:39.610 16:34:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:39.610 16:34:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127372 00:16:39.610 16:34:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:39.610 16:34:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:39.610 16:34:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127372' 00:16:39.610 killing process with pid 127372 00:16:39.610 16:34:10 -- common/autotest_common.sh@945 -- # kill 127372 00:16:39.610 [2024-07-13 16:34:10.926444] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.610 16:34:10 -- common/autotest_common.sh@950 -- # wait 127372 00:16:39.610 [2024-07-13 16:34:10.926655] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.610 [2024-07-13 16:34:10.926795] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.610 [2024-07-13 16:34:10.926832] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:39.610 [2024-07-13 16:34:10.994471] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.178 16:34:11 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:40.178 00:16:40.178 real 0m10.185s 00:16:40.178 user 0m17.656s 00:16:40.178 sys 0m1.881s 00:16:40.178 16:34:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.178 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:16:40.178 ************************************ 00:16:40.178 END TEST raid_superblock_test 00:16:40.178 ************************************ 00:16:40.178 16:34:11 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:40.178 16:34:11 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:40.178 16:34:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:40.178 16:34:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:40.178 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:16:40.178 ************************************ 00:16:40.179 START TEST raid_state_function_test 00:16:40.179 ************************************ 00:16:40.179 16:34:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs ))
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@216 -- # strip_size=0
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=127671
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127671'
00:16:40.179 Process raid pid: 127671
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:40.179 16:34:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127671 /var/tmp/spdk-raid.sock
00:16:40.179 16:34:11 -- common/autotest_common.sh@819 -- # '[' -z 127671 ']'
00:16:40.179 16:34:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:40.179 16:34:11 -- common/autotest_common.sh@824 -- # local max_retries=100
00:16:40.179 16:34:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:40.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:40.179 16:34:11 -- common/autotest_common.sh@828 -- # xtrace_disable
00:16:40.179 16:34:11 -- common/autotest_common.sh@10 -- # set +x
00:16:40.179 [2024-07-13 16:34:11.546487] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
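Annotation: the waitforlisten call traced above (autotest_common.sh@819 through @828) blocks until the freshly spawned bdev_svc process is alive and its RPC socket /var/tmp/spdk-raid.sock is ready. The helper's body is not captured in this log, so the bash below is only a minimal sketch of that polling pattern; the function name and the bare socket-file test are illustrative assumptions, not the real implementation.

# Sketch of the wait-for-RPC-listener pattern (assumed details, see note above).
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk-raid.sock}
    local max_retries=100                       # mirrors 'local max_retries=100' above
    while ((max_retries--)); do
        kill -0 "$pid" 2>/dev/null || return 1  # app died during startup
        [[ -S $rpc_addr ]] && return 0          # RPC UNIX socket has appeared
        sleep 0.1
    done
    return 1                                    # gave up waiting for the listener
}
waitforlisten_sketch 127671 /var/tmp/spdk-raid.sock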
00:16:40.179 [2024-07-13 16:34:11.547033] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.451 [2024-07-13 16:34:11.703365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.451 [2024-07-13 16:34:11.783650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.451 [2024-07-13 16:34:11.862513] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.019 16:34:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:41.019 16:34:12 -- common/autotest_common.sh@852 -- # return 0 00:16:41.019 16:34:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:41.278 [2024-07-13 16:34:12.663102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.278 [2024-07-13 16:34:12.663477] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.278 [2024-07-13 16:34:12.663586] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.278 [2024-07-13 16:34:12.663640] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.278 [2024-07-13 16:34:12.663666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:41.278 [2024-07-13 16:34:12.663738] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.278 16:34:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.550 16:34:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.550 "name": "Existed_Raid", 00:16:41.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.550 "strip_size_kb": 0, 00:16:41.550 "state": "configuring", 00:16:41.550 "raid_level": "raid1", 00:16:41.550 "superblock": false, 00:16:41.550 "num_base_bdevs": 3, 00:16:41.550 "num_base_bdevs_discovered": 0, 00:16:41.550 "num_base_bdevs_operational": 3, 00:16:41.550 "base_bdevs_list": [ 00:16:41.550 { 00:16:41.550 "name": "BaseBdev1", 00:16:41.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.550 "is_configured": false, 00:16:41.550 "data_offset": 0, 00:16:41.550 "data_size": 0 00:16:41.550 }, 00:16:41.550 { 00:16:41.550 "name": "BaseBdev2", 00:16:41.550 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:41.550 "is_configured": false, 00:16:41.550 "data_offset": 0, 00:16:41.550 "data_size": 0 00:16:41.550 }, 00:16:41.550 { 00:16:41.550 "name": "BaseBdev3", 00:16:41.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.550 "is_configured": false, 00:16:41.550 "data_offset": 0, 00:16:41.550 "data_size": 0 00:16:41.550 } 00:16:41.550 ] 00:16:41.550 }' 00:16:41.550 16:34:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.550 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:16:42.116 16:34:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:42.374 [2024-07-13 16:34:13.723179] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.374 [2024-07-13 16:34:13.723507] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:42.374 16:34:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:42.632 [2024-07-13 16:34:14.003251] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.632 [2024-07-13 16:34:14.003478] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.632 [2024-07-13 16:34:14.003561] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.632 [2024-07-13 16:34:14.003625] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.632 [2024-07-13 16:34:14.003701] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.632 [2024-07-13 16:34:14.003761] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.632 16:34:14 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.891 [2024-07-13 16:34:14.223747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.891 BaseBdev1 00:16:42.891 16:34:14 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:42.891 16:34:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:42.891 16:34:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:42.891 16:34:14 -- common/autotest_common.sh@889 -- # local i 00:16:42.891 16:34:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:42.891 16:34:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:42.891 16:34:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.150 16:34:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:43.409 [ 00:16:43.410 { 00:16:43.410 "name": "BaseBdev1", 00:16:43.410 "aliases": [ 00:16:43.410 "9c59a09a-0282-4d17-9568-8ab586f1a5bf" 00:16:43.410 ], 00:16:43.410 "product_name": "Malloc disk", 00:16:43.410 "block_size": 512, 00:16:43.410 "num_blocks": 65536, 00:16:43.410 "uuid": "9c59a09a-0282-4d17-9568-8ab586f1a5bf", 00:16:43.410 "assigned_rate_limits": { 00:16:43.410 "rw_ios_per_sec": 0, 00:16:43.410 "rw_mbytes_per_sec": 0, 00:16:43.410 "r_mbytes_per_sec": 0, 00:16:43.410 "w_mbytes_per_sec": 0 
00:16:43.410 }, 00:16:43.410 "claimed": true, 00:16:43.410 "claim_type": "exclusive_write", 00:16:43.410 "zoned": false, 00:16:43.410 "supported_io_types": { 00:16:43.410 "read": true, 00:16:43.410 "write": true, 00:16:43.410 "unmap": true, 00:16:43.410 "write_zeroes": true, 00:16:43.410 "flush": true, 00:16:43.410 "reset": true, 00:16:43.410 "compare": false, 00:16:43.410 "compare_and_write": false, 00:16:43.410 "abort": true, 00:16:43.410 "nvme_admin": false, 00:16:43.410 "nvme_io": false 00:16:43.410 }, 00:16:43.410 "memory_domains": [ 00:16:43.410 { 00:16:43.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.410 "dma_device_type": 2 00:16:43.410 } 00:16:43.410 ], 00:16:43.410 "driver_specific": {} 00:16:43.410 } 00:16:43.410 ] 00:16:43.410 16:34:14 -- common/autotest_common.sh@895 -- # return 0 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.410 "name": "Existed_Raid", 00:16:43.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.410 "strip_size_kb": 0, 00:16:43.410 "state": "configuring", 00:16:43.410 "raid_level": "raid1", 00:16:43.410 "superblock": false, 00:16:43.410 "num_base_bdevs": 3, 00:16:43.410 "num_base_bdevs_discovered": 1, 00:16:43.410 "num_base_bdevs_operational": 3, 00:16:43.410 "base_bdevs_list": [ 00:16:43.410 { 00:16:43.410 "name": "BaseBdev1", 00:16:43.410 "uuid": "9c59a09a-0282-4d17-9568-8ab586f1a5bf", 00:16:43.410 "is_configured": true, 00:16:43.410 "data_offset": 0, 00:16:43.410 "data_size": 65536 00:16:43.410 }, 00:16:43.410 { 00:16:43.410 "name": "BaseBdev2", 00:16:43.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.410 "is_configured": false, 00:16:43.410 "data_offset": 0, 00:16:43.410 "data_size": 0 00:16:43.410 }, 00:16:43.410 { 00:16:43.410 "name": "BaseBdev3", 00:16:43.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.410 "is_configured": false, 00:16:43.410 "data_offset": 0, 00:16:43.410 "data_size": 0 00:16:43.410 } 00:16:43.410 ] 00:16:43.410 }' 00:16:43.410 16:34:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.410 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:16:43.977 16:34:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:44.237 [2024-07-13 16:34:15.652037] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.237 [2024-07-13 16:34:15.652371] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 
name Existed_Raid, state configuring 00:16:44.237 16:34:15 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:44.237 16:34:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:44.497 [2024-07-13 16:34:15.836210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.497 [2024-07-13 16:34:15.839016] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.497 [2024-07-13 16:34:15.839212] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.497 [2024-07-13 16:34:15.839300] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:44.497 [2024-07-13 16:34:15.839361] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.497 16:34:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.756 16:34:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.756 "name": "Existed_Raid", 00:16:44.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.756 "strip_size_kb": 0, 00:16:44.756 "state": "configuring", 00:16:44.756 "raid_level": "raid1", 00:16:44.756 "superblock": false, 00:16:44.756 "num_base_bdevs": 3, 00:16:44.756 "num_base_bdevs_discovered": 1, 00:16:44.756 "num_base_bdevs_operational": 3, 00:16:44.756 "base_bdevs_list": [ 00:16:44.756 { 00:16:44.756 "name": "BaseBdev1", 00:16:44.756 "uuid": "9c59a09a-0282-4d17-9568-8ab586f1a5bf", 00:16:44.756 "is_configured": true, 00:16:44.756 "data_offset": 0, 00:16:44.756 "data_size": 65536 00:16:44.756 }, 00:16:44.756 { 00:16:44.756 "name": "BaseBdev2", 00:16:44.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.756 "is_configured": false, 00:16:44.756 "data_offset": 0, 00:16:44.756 "data_size": 0 00:16:44.756 }, 00:16:44.756 { 00:16:44.756 "name": "BaseBdev3", 00:16:44.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.756 "is_configured": false, 00:16:44.756 "data_offset": 0, 00:16:44.756 "data_size": 0 00:16:44.756 } 00:16:44.756 ] 00:16:44.756 }' 00:16:44.756 16:34:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.756 16:34:16 -- common/autotest_common.sh@10 -- # set +x 00:16:45.323 16:34:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:45.583 [2024-07-13 16:34:16.918174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.583 BaseBdev2 00:16:45.583 16:34:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:45.583 16:34:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:45.583 16:34:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:45.583 16:34:16 -- common/autotest_common.sh@889 -- # local i 00:16:45.583 16:34:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:45.583 16:34:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:45.583 16:34:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.842 16:34:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:45.842 [ 00:16:45.842 { 00:16:45.842 "name": "BaseBdev2", 00:16:45.842 "aliases": [ 00:16:45.842 "ff69f906-4747-43ca-91d6-5dffbf904f1f" 00:16:45.842 ], 00:16:45.842 "product_name": "Malloc disk", 00:16:45.842 "block_size": 512, 00:16:45.842 "num_blocks": 65536, 00:16:45.842 "uuid": "ff69f906-4747-43ca-91d6-5dffbf904f1f", 00:16:45.842 "assigned_rate_limits": { 00:16:45.842 "rw_ios_per_sec": 0, 00:16:45.842 "rw_mbytes_per_sec": 0, 00:16:45.842 "r_mbytes_per_sec": 0, 00:16:45.842 "w_mbytes_per_sec": 0 00:16:45.842 }, 00:16:45.842 "claimed": true, 00:16:45.842 "claim_type": "exclusive_write", 00:16:45.842 "zoned": false, 00:16:45.842 "supported_io_types": { 00:16:45.842 "read": true, 00:16:45.842 "write": true, 00:16:45.842 "unmap": true, 00:16:45.842 "write_zeroes": true, 00:16:45.842 "flush": true, 00:16:45.842 "reset": true, 00:16:45.842 "compare": false, 00:16:45.842 "compare_and_write": false, 00:16:45.842 "abort": true, 00:16:45.842 "nvme_admin": false, 00:16:45.842 "nvme_io": false 00:16:45.842 }, 00:16:45.842 "memory_domains": [ 00:16:45.842 { 00:16:45.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.842 "dma_device_type": 2 00:16:45.842 } 00:16:45.842 ], 00:16:45.842 "driver_specific": {} 00:16:45.842 } 00:16:45.842 ] 00:16:46.101 16:34:17 -- common/autotest_common.sh@895 -- # return 0 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.101 16:34:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.360 16:34:17 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:46.360 "name": "Existed_Raid", 00:16:46.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.360 "strip_size_kb": 0, 00:16:46.360 "state": "configuring", 00:16:46.360 "raid_level": "raid1", 00:16:46.360 "superblock": false, 00:16:46.360 "num_base_bdevs": 3, 00:16:46.360 "num_base_bdevs_discovered": 2, 00:16:46.360 "num_base_bdevs_operational": 3, 00:16:46.360 "base_bdevs_list": [ 00:16:46.360 { 00:16:46.360 "name": "BaseBdev1", 00:16:46.360 "uuid": "9c59a09a-0282-4d17-9568-8ab586f1a5bf", 00:16:46.360 "is_configured": true, 00:16:46.360 "data_offset": 0, 00:16:46.360 "data_size": 65536 00:16:46.360 }, 00:16:46.360 { 00:16:46.360 "name": "BaseBdev2", 00:16:46.360 "uuid": "ff69f906-4747-43ca-91d6-5dffbf904f1f", 00:16:46.360 "is_configured": true, 00:16:46.360 "data_offset": 0, 00:16:46.360 "data_size": 65536 00:16:46.360 }, 00:16:46.360 { 00:16:46.360 "name": "BaseBdev3", 00:16:46.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.360 "is_configured": false, 00:16:46.360 "data_offset": 0, 00:16:46.360 "data_size": 0 00:16:46.360 } 00:16:46.360 ] 00:16:46.360 }' 00:16:46.360 16:34:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.360 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:16:46.927 16:34:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:47.186 [2024-07-13 16:34:18.450307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.186 [2024-07-13 16:34:18.450633] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:47.186 [2024-07-13 16:34:18.450679] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:47.186 [2024-07-13 16:34:18.450968] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:16:47.186 [2024-07-13 16:34:18.451473] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:47.186 [2024-07-13 16:34:18.451581] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:47.186 [2024-07-13 16:34:18.451956] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.186 BaseBdev3 00:16:47.186 16:34:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:47.186 16:34:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:47.186 16:34:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:47.186 16:34:18 -- common/autotest_common.sh@889 -- # local i 00:16:47.186 16:34:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:47.186 16:34:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:47.186 16:34:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:47.446 16:34:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:47.446 [ 00:16:47.446 { 00:16:47.446 "name": "BaseBdev3", 00:16:47.446 "aliases": [ 00:16:47.446 "4cd6ef88-109c-4d8e-9394-6c124ac9f4c2" 00:16:47.446 ], 00:16:47.446 "product_name": "Malloc disk", 00:16:47.446 "block_size": 512, 00:16:47.446 "num_blocks": 65536, 00:16:47.446 "uuid": "4cd6ef88-109c-4d8e-9394-6c124ac9f4c2", 00:16:47.446 "assigned_rate_limits": { 00:16:47.446 "rw_ios_per_sec": 0, 00:16:47.446 "rw_mbytes_per_sec": 0, 
00:16:47.446 "r_mbytes_per_sec": 0, 00:16:47.446 "w_mbytes_per_sec": 0 00:16:47.446 }, 00:16:47.446 "claimed": true, 00:16:47.446 "claim_type": "exclusive_write", 00:16:47.446 "zoned": false, 00:16:47.446 "supported_io_types": { 00:16:47.446 "read": true, 00:16:47.446 "write": true, 00:16:47.446 "unmap": true, 00:16:47.446 "write_zeroes": true, 00:16:47.446 "flush": true, 00:16:47.446 "reset": true, 00:16:47.446 "compare": false, 00:16:47.446 "compare_and_write": false, 00:16:47.446 "abort": true, 00:16:47.446 "nvme_admin": false, 00:16:47.446 "nvme_io": false 00:16:47.446 }, 00:16:47.446 "memory_domains": [ 00:16:47.446 { 00:16:47.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.446 "dma_device_type": 2 00:16:47.446 } 00:16:47.446 ], 00:16:47.446 "driver_specific": {} 00:16:47.446 } 00:16:47.446 ] 00:16:47.446 16:34:18 -- common/autotest_common.sh@895 -- # return 0 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.446 16:34:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.706 16:34:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.706 "name": "Existed_Raid", 00:16:47.706 "uuid": "bc4d7a47-3f4f-43c2-9cc6-6c89c39b1535", 00:16:47.706 "strip_size_kb": 0, 00:16:47.706 "state": "online", 00:16:47.706 "raid_level": "raid1", 00:16:47.706 "superblock": false, 00:16:47.706 "num_base_bdevs": 3, 00:16:47.706 "num_base_bdevs_discovered": 3, 00:16:47.706 "num_base_bdevs_operational": 3, 00:16:47.706 "base_bdevs_list": [ 00:16:47.706 { 00:16:47.706 "name": "BaseBdev1", 00:16:47.706 "uuid": "9c59a09a-0282-4d17-9568-8ab586f1a5bf", 00:16:47.706 "is_configured": true, 00:16:47.706 "data_offset": 0, 00:16:47.706 "data_size": 65536 00:16:47.706 }, 00:16:47.706 { 00:16:47.706 "name": "BaseBdev2", 00:16:47.706 "uuid": "ff69f906-4747-43ca-91d6-5dffbf904f1f", 00:16:47.706 "is_configured": true, 00:16:47.706 "data_offset": 0, 00:16:47.706 "data_size": 65536 00:16:47.706 }, 00:16:47.706 { 00:16:47.706 "name": "BaseBdev3", 00:16:47.706 "uuid": "4cd6ef88-109c-4d8e-9394-6c124ac9f4c2", 00:16:47.706 "is_configured": true, 00:16:47.706 "data_offset": 0, 00:16:47.706 "data_size": 65536 00:16:47.706 } 00:16:47.706 ] 00:16:47.706 }' 00:16:47.706 16:34:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.706 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:16:48.274 16:34:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:48.534 [2024-07-13 
16:34:19.940954] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.534 16:34:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.123 16:34:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.123 "name": "Existed_Raid", 00:16:49.123 "uuid": "bc4d7a47-3f4f-43c2-9cc6-6c89c39b1535", 00:16:49.123 "strip_size_kb": 0, 00:16:49.123 "state": "online", 00:16:49.123 "raid_level": "raid1", 00:16:49.123 "superblock": false, 00:16:49.123 "num_base_bdevs": 3, 00:16:49.123 "num_base_bdevs_discovered": 2, 00:16:49.123 "num_base_bdevs_operational": 2, 00:16:49.123 "base_bdevs_list": [ 00:16:49.123 { 00:16:49.123 "name": null, 00:16:49.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.123 "is_configured": false, 00:16:49.123 "data_offset": 0, 00:16:49.123 "data_size": 65536 00:16:49.123 }, 00:16:49.123 { 00:16:49.123 "name": "BaseBdev2", 00:16:49.123 "uuid": "ff69f906-4747-43ca-91d6-5dffbf904f1f", 00:16:49.123 "is_configured": true, 00:16:49.123 "data_offset": 0, 00:16:49.123 "data_size": 65536 00:16:49.123 }, 00:16:49.123 { 00:16:49.123 "name": "BaseBdev3", 00:16:49.123 "uuid": "4cd6ef88-109c-4d8e-9394-6c124ac9f4c2", 00:16:49.123 "is_configured": true, 00:16:49.123 "data_offset": 0, 00:16:49.123 "data_size": 65536 00:16:49.123 } 00:16:49.123 ] 00:16:49.123 }' 00:16:49.123 16:34:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.123 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:16:49.691 16:34:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:49.691 16:34:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:49.691 16:34:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.691 16:34:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:49.691 16:34:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:49.691 16:34:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.691 16:34:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:49.949 [2024-07-13 16:34:21.351034] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
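Annotation: the xtrace lines at bdev_raid.sh@195 and @196 above are the body of has_redundancy. Earlier in this log the same case statement reached "return 1" at @197 for the concat level, while raid1 returns 0 here, so the test keeps expected_state=online even after a base bdev was removed. A plausible bash reconstruction follows; the exact pattern list in the real script is an assumption:

# Reconstructed from the trace at bdev_raid.sh@195-197 (sketch, not the
# verbatim script): raid1 mirrors data across base bdevs, so the array
# survives the loss of one of them.
has_redundancy() {
    case $1 in
        raid1)
            return 0 ;;  # redundant: array stays online when degraded
        *)
            return 1 ;;  # raid0/concat observed returning 1 in this log
    esac
}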
00:16:49.949 16:34:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:49.949 16:34:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:49.949 16:34:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.949 16:34:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:50.207 16:34:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:50.207 16:34:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:50.207 16:34:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:50.466 [2024-07-13 16:34:21.800398] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:50.466 [2024-07-13 16:34:21.800654] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.466 [2024-07-13 16:34:21.800903] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.466 [2024-07-13 16:34:21.822536] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.466 [2024-07-13 16:34:21.822810] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:50.466 16:34:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:50.466 16:34:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:50.466 16:34:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.466 16:34:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:50.725 16:34:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:50.725 16:34:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:50.725 16:34:22 -- bdev/bdev_raid.sh@287 -- # killprocess 127671 00:16:50.725 16:34:22 -- common/autotest_common.sh@926 -- # '[' -z 127671 ']' 00:16:50.725 16:34:22 -- common/autotest_common.sh@930 -- # kill -0 127671 00:16:50.725 16:34:22 -- common/autotest_common.sh@931 -- # uname 00:16:50.725 16:34:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:50.725 16:34:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127671 00:16:50.725 killing process with pid 127671 00:16:50.725 16:34:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:50.725 16:34:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:50.725 16:34:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127671' 00:16:50.725 16:34:22 -- common/autotest_common.sh@945 -- # kill 127671 00:16:50.725 16:34:22 -- common/autotest_common.sh@950 -- # wait 127671 00:16:50.725 [2024-07-13 16:34:22.166148] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.725 [2024-07-13 16:34:22.166254] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:51.292 ************************************ 00:16:51.292 END TEST raid_state_function_test 00:16:51.292 00:16:51.292 real 0m11.117s 00:16:51.292 user 0m19.460s 00:16:51.292 sys 0m2.065s 00:16:51.292 16:34:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.292 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:16:51.292 ************************************ 00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
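Annotation: with the last base bdev deleted, the loop above (bdev_raid.sh@281 and @282) queries the RPC server once more and expects no raid bdev to be left; both commands appear verbatim in the trace. Condensed into a single check (the echo on failure is added here for illustration):

# Empty jq output means Existed_Raid was cleaned up once its base bdev count hit 0.
raid_bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
[[ -n $raid_bdev ]] && echo "unexpected raid bdev still present: $raid_bdev"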
00:16:51.292 16:34:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:16:51.292 16:34:22 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:16:51.292 16:34:22 -- common/autotest_common.sh@10 -- # set +x
00:16:51.292 ************************************
00:16:51.292 START TEST raid_state_function_test_sb
00:16:51.292 ************************************
00:16:51.292 16:34:22 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@204 -- # local superblock=true
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@216 -- # strip_size=0
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=128042
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128042'
00:16:51.292 Process raid pid: 128042
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128042 /var/tmp/spdk-raid.sock
00:16:51.292 16:34:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:51.292 16:34:22 -- common/autotest_common.sh@819 -- # '[' -z 128042 ']'
00:16:51.292 16:34:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:51.292 16:34:22 -- common/autotest_common.sh@824 -- # local max_retries=100
00:16:51.292 16:34:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:51.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:51.292 16:34:22 -- common/autotest_common.sh@828 -- # xtrace_disable
00:16:51.292 16:34:22 -- common/autotest_common.sh@10 -- # set +x
00:16:51.292 [2024-07-13 16:34:22.739387] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
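Annotation: the only knob that differs from the previous run is superblock=true, which sets superblock_create_arg=-s above; every bdev_raid_create issued below therefore carries -s, and each base bdev reserves room for the on-disk superblock (data_offset 2048 and data_size 63488 in the JSON dumps that follow, versus 0 and 65536 in the run without a superblock). Side by side, as both forms appear in this log:

# Without superblock (raid_state_function_test, pid 127671):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# With superblock (raid_state_function_test_sb, pid 128042):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid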
00:16:51.292 [2024-07-13 16:34:22.739958] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.550 [2024-07-13 16:34:22.901937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.550 [2024-07-13 16:34:22.988632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.808 [2024-07-13 16:34:23.074073] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.374 16:34:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.374 16:34:23 -- common/autotest_common.sh@852 -- # return 0 00:16:52.374 16:34:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:52.374 [2024-07-13 16:34:23.773783] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.374 [2024-07-13 16:34:23.774170] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.374 [2024-07-13 16:34:23.774272] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.374 [2024-07-13 16:34:23.774328] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.374 [2024-07-13 16:34:23.774355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.374 [2024-07-13 16:34:23.774426] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.374 16:34:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:52.374 16:34:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:52.374 16:34:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:52.375 16:34:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:52.375 16:34:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:52.375 16:34:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:52.375 16:34:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.375 16:34:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.375 16:34:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.375 16:34:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.375 16:34:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.375 16:34:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.633 16:34:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.633 "name": "Existed_Raid", 00:16:52.633 "uuid": "c4eab09f-62f1-462e-80a6-7d61662beadd", 00:16:52.633 "strip_size_kb": 0, 00:16:52.633 "state": "configuring", 00:16:52.633 "raid_level": "raid1", 00:16:52.633 "superblock": true, 00:16:52.633 "num_base_bdevs": 3, 00:16:52.633 "num_base_bdevs_discovered": 0, 00:16:52.633 "num_base_bdevs_operational": 3, 00:16:52.633 "base_bdevs_list": [ 00:16:52.633 { 00:16:52.633 "name": "BaseBdev1", 00:16:52.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.633 "is_configured": false, 00:16:52.633 "data_offset": 0, 00:16:52.633 "data_size": 0 00:16:52.633 }, 00:16:52.633 { 00:16:52.633 "name": "BaseBdev2", 00:16:52.633 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:52.633 "is_configured": false, 00:16:52.633 "data_offset": 0, 00:16:52.633 "data_size": 0 00:16:52.633 }, 00:16:52.633 { 00:16:52.633 "name": "BaseBdev3", 00:16:52.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.633 "is_configured": false, 00:16:52.633 "data_offset": 0, 00:16:52.633 "data_size": 0 00:16:52.633 } 00:16:52.633 ] 00:16:52.633 }' 00:16:52.633 16:34:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.633 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:16:53.214 16:34:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:53.472 [2024-07-13 16:34:24.841808] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.472 [2024-07-13 16:34:24.842072] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:53.472 16:34:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:53.731 [2024-07-13 16:34:25.029947] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.731 [2024-07-13 16:34:25.030286] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.731 [2024-07-13 16:34:25.030369] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.731 [2024-07-13 16:34:25.030428] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.731 [2024-07-13 16:34:25.030454] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.731 [2024-07-13 16:34:25.030502] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.731 16:34:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:53.989 [2024-07-13 16:34:25.250235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.989 BaseBdev1 00:16:53.989 16:34:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:53.989 16:34:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:53.989 16:34:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:53.989 16:34:25 -- common/autotest_common.sh@889 -- # local i 00:16:53.989 16:34:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:53.989 16:34:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:53.989 16:34:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.247 16:34:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.247 [ 00:16:54.247 { 00:16:54.247 "name": "BaseBdev1", 00:16:54.247 "aliases": [ 00:16:54.247 "d4b81c7e-61de-403f-8d64-77263599c662" 00:16:54.247 ], 00:16:54.247 "product_name": "Malloc disk", 00:16:54.247 "block_size": 512, 00:16:54.247 "num_blocks": 65536, 00:16:54.247 "uuid": "d4b81c7e-61de-403f-8d64-77263599c662", 00:16:54.247 "assigned_rate_limits": { 00:16:54.247 "rw_ios_per_sec": 0, 00:16:54.247 "rw_mbytes_per_sec": 0, 00:16:54.247 "r_mbytes_per_sec": 0, 00:16:54.247 "w_mbytes_per_sec": 0 
00:16:54.247 }, 00:16:54.247 "claimed": true, 00:16:54.247 "claim_type": "exclusive_write", 00:16:54.247 "zoned": false, 00:16:54.247 "supported_io_types": { 00:16:54.247 "read": true, 00:16:54.247 "write": true, 00:16:54.247 "unmap": true, 00:16:54.247 "write_zeroes": true, 00:16:54.247 "flush": true, 00:16:54.247 "reset": true, 00:16:54.247 "compare": false, 00:16:54.247 "compare_and_write": false, 00:16:54.247 "abort": true, 00:16:54.247 "nvme_admin": false, 00:16:54.247 "nvme_io": false 00:16:54.247 }, 00:16:54.247 "memory_domains": [ 00:16:54.247 { 00:16:54.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.247 "dma_device_type": 2 00:16:54.247 } 00:16:54.247 ], 00:16:54.247 "driver_specific": {} 00:16:54.247 } 00:16:54.247 ] 00:16:54.505 16:34:25 -- common/autotest_common.sh@895 -- # return 0 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.505 16:34:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.505 "name": "Existed_Raid", 00:16:54.505 "uuid": "bc913df9-ac05-4d04-b9c0-00a638e8ec5a", 00:16:54.505 "strip_size_kb": 0, 00:16:54.505 "state": "configuring", 00:16:54.505 "raid_level": "raid1", 00:16:54.505 "superblock": true, 00:16:54.505 "num_base_bdevs": 3, 00:16:54.505 "num_base_bdevs_discovered": 1, 00:16:54.505 "num_base_bdevs_operational": 3, 00:16:54.505 "base_bdevs_list": [ 00:16:54.505 { 00:16:54.505 "name": "BaseBdev1", 00:16:54.505 "uuid": "d4b81c7e-61de-403f-8d64-77263599c662", 00:16:54.506 "is_configured": true, 00:16:54.506 "data_offset": 2048, 00:16:54.506 "data_size": 63488 00:16:54.506 }, 00:16:54.506 { 00:16:54.506 "name": "BaseBdev2", 00:16:54.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.506 "is_configured": false, 00:16:54.506 "data_offset": 0, 00:16:54.506 "data_size": 0 00:16:54.506 }, 00:16:54.506 { 00:16:54.506 "name": "BaseBdev3", 00:16:54.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.506 "is_configured": false, 00:16:54.506 "data_offset": 0, 00:16:54.506 "data_size": 0 00:16:54.506 } 00:16:54.506 ] 00:16:54.506 }' 00:16:54.506 16:34:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.506 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:16:55.074 16:34:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:55.333 [2024-07-13 16:34:26.678602] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.333 [2024-07-13 16:34:26.678890] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:16:55.333 16:34:26 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:55.333 16:34:26 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:55.592 16:34:27 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:55.852 BaseBdev1 00:16:55.852 16:34:27 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:55.852 16:34:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:55.852 16:34:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:55.852 16:34:27 -- common/autotest_common.sh@889 -- # local i 00:16:55.852 16:34:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:55.852 16:34:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:55.852 16:34:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.111 16:34:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.370 [ 00:16:56.370 { 00:16:56.370 "name": "BaseBdev1", 00:16:56.370 "aliases": [ 00:16:56.370 "319e32ff-4583-444a-9a12-322ae868e2a2" 00:16:56.370 ], 00:16:56.370 "product_name": "Malloc disk", 00:16:56.370 "block_size": 512, 00:16:56.370 "num_blocks": 65536, 00:16:56.370 "uuid": "319e32ff-4583-444a-9a12-322ae868e2a2", 00:16:56.370 "assigned_rate_limits": { 00:16:56.370 "rw_ios_per_sec": 0, 00:16:56.370 "rw_mbytes_per_sec": 0, 00:16:56.370 "r_mbytes_per_sec": 0, 00:16:56.370 "w_mbytes_per_sec": 0 00:16:56.370 }, 00:16:56.370 "claimed": false, 00:16:56.370 "zoned": false, 00:16:56.370 "supported_io_types": { 00:16:56.370 "read": true, 00:16:56.370 "write": true, 00:16:56.370 "unmap": true, 00:16:56.370 "write_zeroes": true, 00:16:56.370 "flush": true, 00:16:56.370 "reset": true, 00:16:56.370 "compare": false, 00:16:56.370 "compare_and_write": false, 00:16:56.370 "abort": true, 00:16:56.370 "nvme_admin": false, 00:16:56.370 "nvme_io": false 00:16:56.370 }, 00:16:56.370 "memory_domains": [ 00:16:56.370 { 00:16:56.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.370 "dma_device_type": 2 00:16:56.370 } 00:16:56.370 ], 00:16:56.370 "driver_specific": {} 00:16:56.370 } 00:16:56.370 ] 00:16:56.370 16:34:27 -- common/autotest_common.sh@895 -- # return 0 00:16:56.370 16:34:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:56.629 [2024-07-13 16:34:28.024475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.629 [2024-07-13 16:34:28.031526] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.629 [2024-07-13 16:34:28.031996] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.629 [2024-07-13 16:34:28.032335] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.629 [2024-07-13 16:34:28.032714] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:56.629 16:34:28 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.629 16:34:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.888 16:34:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.888 "name": "Existed_Raid", 00:16:56.888 "uuid": "521edd99-289b-4166-9cc1-15cc4a76d924", 00:16:56.888 "strip_size_kb": 0, 00:16:56.888 "state": "configuring", 00:16:56.888 "raid_level": "raid1", 00:16:56.888 "superblock": true, 00:16:56.888 "num_base_bdevs": 3, 00:16:56.888 "num_base_bdevs_discovered": 1, 00:16:56.888 "num_base_bdevs_operational": 3, 00:16:56.888 "base_bdevs_list": [ 00:16:56.888 { 00:16:56.888 "name": "BaseBdev1", 00:16:56.888 "uuid": "319e32ff-4583-444a-9a12-322ae868e2a2", 00:16:56.888 "is_configured": true, 00:16:56.888 "data_offset": 2048, 00:16:56.888 "data_size": 63488 00:16:56.888 }, 00:16:56.888 { 00:16:56.888 "name": "BaseBdev2", 00:16:56.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.888 "is_configured": false, 00:16:56.888 "data_offset": 0, 00:16:56.888 "data_size": 0 00:16:56.888 }, 00:16:56.888 { 00:16:56.888 "name": "BaseBdev3", 00:16:56.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.888 "is_configured": false, 00:16:56.888 "data_offset": 0, 00:16:56.888 "data_size": 0 00:16:56.888 } 00:16:56.888 ] 00:16:56.888 }' 00:16:56.888 16:34:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.888 16:34:28 -- common/autotest_common.sh@10 -- # set +x 00:16:57.456 16:34:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.715 [2024-07-13 16:34:29.126323] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.715 BaseBdev2 00:16:57.715 16:34:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:57.715 16:34:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:57.715 16:34:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:57.715 16:34:29 -- common/autotest_common.sh@889 -- # local i 00:16:57.715 16:34:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:57.715 16:34:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:57.715 16:34:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.974 16:34:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.233 [ 00:16:58.233 { 00:16:58.233 "name": "BaseBdev2", 00:16:58.233 "aliases": [ 00:16:58.233 
"f04878ba-388a-4d95-bd1b-4b82ff1eb0b8" 00:16:58.233 ], 00:16:58.233 "product_name": "Malloc disk", 00:16:58.233 "block_size": 512, 00:16:58.233 "num_blocks": 65536, 00:16:58.233 "uuid": "f04878ba-388a-4d95-bd1b-4b82ff1eb0b8", 00:16:58.233 "assigned_rate_limits": { 00:16:58.233 "rw_ios_per_sec": 0, 00:16:58.233 "rw_mbytes_per_sec": 0, 00:16:58.233 "r_mbytes_per_sec": 0, 00:16:58.233 "w_mbytes_per_sec": 0 00:16:58.233 }, 00:16:58.233 "claimed": true, 00:16:58.233 "claim_type": "exclusive_write", 00:16:58.233 "zoned": false, 00:16:58.233 "supported_io_types": { 00:16:58.233 "read": true, 00:16:58.233 "write": true, 00:16:58.233 "unmap": true, 00:16:58.233 "write_zeroes": true, 00:16:58.233 "flush": true, 00:16:58.233 "reset": true, 00:16:58.233 "compare": false, 00:16:58.233 "compare_and_write": false, 00:16:58.233 "abort": true, 00:16:58.233 "nvme_admin": false, 00:16:58.233 "nvme_io": false 00:16:58.233 }, 00:16:58.233 "memory_domains": [ 00:16:58.233 { 00:16:58.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.233 "dma_device_type": 2 00:16:58.233 } 00:16:58.233 ], 00:16:58.233 "driver_specific": {} 00:16:58.233 } 00:16:58.233 ] 00:16:58.233 16:34:29 -- common/autotest_common.sh@895 -- # return 0 00:16:58.233 16:34:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:58.233 16:34:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:58.233 16:34:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:58.233 16:34:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.233 16:34:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.233 16:34:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:58.234 16:34:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:58.234 16:34:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.234 16:34:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.234 16:34:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.234 16:34:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.234 16:34:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.234 16:34:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.234 16:34:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.492 16:34:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.492 "name": "Existed_Raid", 00:16:58.492 "uuid": "521edd99-289b-4166-9cc1-15cc4a76d924", 00:16:58.492 "strip_size_kb": 0, 00:16:58.492 "state": "configuring", 00:16:58.492 "raid_level": "raid1", 00:16:58.492 "superblock": true, 00:16:58.492 "num_base_bdevs": 3, 00:16:58.492 "num_base_bdevs_discovered": 2, 00:16:58.492 "num_base_bdevs_operational": 3, 00:16:58.492 "base_bdevs_list": [ 00:16:58.492 { 00:16:58.492 "name": "BaseBdev1", 00:16:58.492 "uuid": "319e32ff-4583-444a-9a12-322ae868e2a2", 00:16:58.492 "is_configured": true, 00:16:58.492 "data_offset": 2048, 00:16:58.492 "data_size": 63488 00:16:58.492 }, 00:16:58.492 { 00:16:58.492 "name": "BaseBdev2", 00:16:58.492 "uuid": "f04878ba-388a-4d95-bd1b-4b82ff1eb0b8", 00:16:58.492 "is_configured": true, 00:16:58.492 "data_offset": 2048, 00:16:58.492 "data_size": 63488 00:16:58.492 }, 00:16:58.492 { 00:16:58.492 "name": "BaseBdev3", 00:16:58.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.492 "is_configured": false, 00:16:58.492 "data_offset": 0, 00:16:58.492 "data_size": 0 00:16:58.492 } 
00:16:58.492 ] 00:16:58.492 }' 00:16:58.492 16:34:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.492 16:34:29 -- common/autotest_common.sh@10 -- # set +x 00:16:59.059 16:34:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:59.318 [2024-07-13 16:34:30.720396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.318 [2024-07-13 16:34:30.720923] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:16:59.318 [2024-07-13 16:34:30.721038] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:59.318 [2024-07-13 16:34:30.721285] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:59.318 [2024-07-13 16:34:30.721877] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:16:59.318 [2024-07-13 16:34:30.721998] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:16:59.318 [2024-07-13 16:34:30.722273] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.318 BaseBdev3 00:16:59.318 16:34:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:59.318 16:34:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:59.318 16:34:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:59.318 16:34:30 -- common/autotest_common.sh@889 -- # local i 00:16:59.318 16:34:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:59.318 16:34:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:59.318 16:34:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.614 16:34:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:59.871 [ 00:16:59.871 { 00:16:59.871 "name": "BaseBdev3", 00:16:59.871 "aliases": [ 00:16:59.871 "b296fd15-d9a7-4f28-93ea-eedf688b08c3" 00:16:59.871 ], 00:16:59.871 "product_name": "Malloc disk", 00:16:59.871 "block_size": 512, 00:16:59.871 "num_blocks": 65536, 00:16:59.871 "uuid": "b296fd15-d9a7-4f28-93ea-eedf688b08c3", 00:16:59.871 "assigned_rate_limits": { 00:16:59.871 "rw_ios_per_sec": 0, 00:16:59.871 "rw_mbytes_per_sec": 0, 00:16:59.871 "r_mbytes_per_sec": 0, 00:16:59.871 "w_mbytes_per_sec": 0 00:16:59.871 }, 00:16:59.871 "claimed": true, 00:16:59.871 "claim_type": "exclusive_write", 00:16:59.872 "zoned": false, 00:16:59.872 "supported_io_types": { 00:16:59.872 "read": true, 00:16:59.872 "write": true, 00:16:59.872 "unmap": true, 00:16:59.872 "write_zeroes": true, 00:16:59.872 "flush": true, 00:16:59.872 "reset": true, 00:16:59.872 "compare": false, 00:16:59.872 "compare_and_write": false, 00:16:59.872 "abort": true, 00:16:59.872 "nvme_admin": false, 00:16:59.872 "nvme_io": false 00:16:59.872 }, 00:16:59.872 "memory_domains": [ 00:16:59.872 { 00:16:59.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.872 "dma_device_type": 2 00:16:59.872 } 00:16:59.872 ], 00:16:59.872 "driver_specific": {} 00:16:59.872 } 00:16:59.872 ] 00:16:59.872 16:34:31 -- common/autotest_common.sh@895 -- # return 0 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.872 16:34:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.129 16:34:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.129 "name": "Existed_Raid", 00:17:00.129 "uuid": "521edd99-289b-4166-9cc1-15cc4a76d924", 00:17:00.129 "strip_size_kb": 0, 00:17:00.129 "state": "online", 00:17:00.129 "raid_level": "raid1", 00:17:00.129 "superblock": true, 00:17:00.129 "num_base_bdevs": 3, 00:17:00.129 "num_base_bdevs_discovered": 3, 00:17:00.129 "num_base_bdevs_operational": 3, 00:17:00.129 "base_bdevs_list": [ 00:17:00.129 { 00:17:00.129 "name": "BaseBdev1", 00:17:00.129 "uuid": "319e32ff-4583-444a-9a12-322ae868e2a2", 00:17:00.129 "is_configured": true, 00:17:00.129 "data_offset": 2048, 00:17:00.129 "data_size": 63488 00:17:00.129 }, 00:17:00.129 { 00:17:00.129 "name": "BaseBdev2", 00:17:00.129 "uuid": "f04878ba-388a-4d95-bd1b-4b82ff1eb0b8", 00:17:00.129 "is_configured": true, 00:17:00.129 "data_offset": 2048, 00:17:00.129 "data_size": 63488 00:17:00.129 }, 00:17:00.129 { 00:17:00.129 "name": "BaseBdev3", 00:17:00.129 "uuid": "b296fd15-d9a7-4f28-93ea-eedf688b08c3", 00:17:00.129 "is_configured": true, 00:17:00.129 "data_offset": 2048, 00:17:00.129 "data_size": 63488 00:17:00.129 } 00:17:00.129 ] 00:17:00.129 }' 00:17:00.129 16:34:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.130 16:34:31 -- common/autotest_common.sh@10 -- # set +x 00:17:00.696 16:34:32 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:00.955 [2024-07-13 16:34:32.280917] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
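The verify_raid_bdev_state calls traced above reduce to a single bdev_raid_get_bdevs RPC plus jq checks on a few fields of the returned JSON. A minimal standalone sketch of the same check, assuming the socket path, RPC method and field names seen in this trace (the helper name check_raid_state and its plain return-code interface are illustrative, not part of the suite):

#!/usr/bin/env bash
# Query the raid bdev over the test's RPC socket and compare the fields the
# suite asserts on: state, raid_level and the discovered-member count.
check_raid_state() {
    local name=$1 want_state=$2 want_level=$3 want_discovered=$4 info
    info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r --arg n "$name" '.[] | select(.name == $n)')
    [[ -n $info ]] || return 1
    [[ $(jq -r '.state' <<<"$info") == "$want_state" ]] &&
        [[ $(jq -r '.raid_level' <<<"$info") == "$want_level" ]] &&
        [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq $want_discovered ]]
}

# The check the trace above just performed once BaseBdev3 joined the array:
check_raid_state Existed_Raid online raid1 3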
00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.955 16:34:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.214 16:34:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.214 "name": "Existed_Raid", 00:17:01.214 "uuid": "521edd99-289b-4166-9cc1-15cc4a76d924", 00:17:01.214 "strip_size_kb": 0, 00:17:01.214 "state": "online", 00:17:01.214 "raid_level": "raid1", 00:17:01.214 "superblock": true, 00:17:01.214 "num_base_bdevs": 3, 00:17:01.214 "num_base_bdevs_discovered": 2, 00:17:01.214 "num_base_bdevs_operational": 2, 00:17:01.214 "base_bdevs_list": [ 00:17:01.214 { 00:17:01.214 "name": null, 00:17:01.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.214 "is_configured": false, 00:17:01.214 "data_offset": 2048, 00:17:01.214 "data_size": 63488 00:17:01.214 }, 00:17:01.214 { 00:17:01.214 "name": "BaseBdev2", 00:17:01.214 "uuid": "f04878ba-388a-4d95-bd1b-4b82ff1eb0b8", 00:17:01.214 "is_configured": true, 00:17:01.214 "data_offset": 2048, 00:17:01.214 "data_size": 63488 00:17:01.214 }, 00:17:01.214 { 00:17:01.214 "name": "BaseBdev3", 00:17:01.214 "uuid": "b296fd15-d9a7-4f28-93ea-eedf688b08c3", 00:17:01.214 "is_configured": true, 00:17:01.214 "data_offset": 2048, 00:17:01.214 "data_size": 63488 00:17:01.214 } 00:17:01.214 ] 00:17:01.214 }' 00:17:01.214 16:34:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.214 16:34:32 -- common/autotest_common.sh@10 -- # set +x 00:17:01.781 16:34:33 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:01.781 16:34:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:01.781 16:34:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:01.781 16:34:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.039 16:34:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:02.039 16:34:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:02.039 16:34:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:02.297 [2024-07-13 16:34:33.569306] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:02.297 16:34:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:02.297 16:34:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:02.297 16:34:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:02.297 16:34:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.555 16:34:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:02.555 16:34:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:02.555 16:34:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:02.813 [2024-07-13 16:34:34.083096] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:02.813 [2024-07-13 16:34:34.083394] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.813 [2024-07-13 16:34:34.083627] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.813 [2024-07-13 16:34:34.104914] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.813 [2024-07-13 16:34:34.105211] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:17:02.813 16:34:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:02.813 16:34:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:02.813 16:34:34 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.813 16:34:34 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:03.072 16:34:34 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:03.072 16:34:34 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:03.072 16:34:34 -- bdev/bdev_raid.sh@287 -- # killprocess 128042 00:17:03.072 16:34:34 -- common/autotest_common.sh@926 -- # '[' -z 128042 ']' 00:17:03.072 16:34:34 -- common/autotest_common.sh@930 -- # kill -0 128042 00:17:03.072 16:34:34 -- common/autotest_common.sh@931 -- # uname 00:17:03.072 16:34:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.072 16:34:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128042 00:17:03.072 16:34:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:03.072 16:34:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:03.072 16:34:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128042' 00:17:03.072 killing process with pid 128042 00:17:03.072 16:34:34 -- common/autotest_common.sh@945 -- # kill 128042 00:17:03.072 16:34:34 -- common/autotest_common.sh@950 -- # wait 128042 00:17:03.072 [2024-07-13 16:34:34.435930] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.072 [2024-07-13 16:34:34.436054] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.640 ************************************ 00:17:03.640 END TEST raid_state_function_test_sb 00:17:03.640 ************************************ 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:03.640 00:17:03.640 real 0m12.185s 00:17:03.640 user 0m21.408s 00:17:03.640 sys 0m2.202s 00:17:03.640 16:34:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.640 16:34:34 -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:03.640 16:34:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:03.640 16:34:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:03.640 16:34:34 -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 ************************************ 00:17:03.640 START TEST raid_superblock_test 00:17:03.640 ************************************ 00:17:03.640 16:34:34 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@357 -- # raid_pid=128430 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@358 -- # waitforlisten 128430 /var/tmp/spdk-raid.sock 00:17:03.640 16:34:34 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:03.640 16:34:34 -- common/autotest_common.sh@819 -- # '[' -z 128430 ']' 00:17:03.640 16:34:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:03.640 16:34:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:03.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:03.640 16:34:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:03.640 16:34:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:03.640 16:34:34 -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 [2024-07-13 16:34:34.981951] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:03.640 [2024-07-13 16:34:34.982276] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128430 ] 00:17:03.898 [2024-07-13 16:34:35.137672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.898 [2024-07-13 16:34:35.223056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.898 [2024-07-13 16:34:35.303035] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.831 16:34:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.831 16:34:35 -- common/autotest_common.sh@852 -- # return 0 00:17:04.831 16:34:35 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:04.831 16:34:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:04.831 16:34:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:04.831 16:34:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:04.831 16:34:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:04.831 16:34:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:04.831 16:34:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:04.831 16:34:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:04.831 16:34:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:04.831 malloc1 00:17:04.831 16:34:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.089 [2024-07-13 16:34:36.380963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.089 [2024-07-13 16:34:36.381117] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.089 [2024-07-13 16:34:36.381170] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:05.089 [2024-07-13 16:34:36.381232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.089 [2024-07-13 16:34:36.384271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.089 [2024-07-13 16:34:36.384355] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.089 pt1 00:17:05.089 16:34:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:05.089 16:34:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:05.089 16:34:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:05.089 16:34:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:05.089 16:34:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:05.089 16:34:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.089 16:34:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.089 16:34:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.089 16:34:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:05.345 malloc2 00:17:05.345 16:34:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.345 [2024-07-13 16:34:36.796990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.345 [2024-07-13 16:34:36.797131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.345 [2024-07-13 16:34:36.797177] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:05.345 [2024-07-13 16:34:36.797229] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.345 [2024-07-13 16:34:36.799999] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.345 [2024-07-13 16:34:36.800057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.345 pt2 00:17:05.602 16:34:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:05.602 16:34:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:05.602 16:34:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:05.602 16:34:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:05.602 16:34:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:05.602 16:34:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.602 16:34:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.602 16:34:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.602 16:34:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:05.602 malloc3 00:17:05.602 16:34:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:05.859 [2024-07-13 16:34:37.241178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:05.859 [2024-07-13 16:34:37.241305] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.859 [2024-07-13 16:34:37.241353] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:05.859 [2024-07-13 16:34:37.241402] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.859 [2024-07-13 16:34:37.244189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.859 [2024-07-13 16:34:37.244249] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:05.859 pt3 00:17:05.859 16:34:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:05.859 16:34:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:05.859 16:34:37 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:06.117 [2024-07-13 16:34:37.445432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.117 [2024-07-13 16:34:37.448028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.117 [2024-07-13 16:34:37.448098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:06.117 [2024-07-13 16:34:37.448346] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:06.117 [2024-07-13 16:34:37.448357] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:06.117 [2024-07-13 16:34:37.448546] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:17:06.117 [2024-07-13 16:34:37.448961] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:06.117 [2024-07-13 16:34:37.448980] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:17:06.117 [2024-07-13 16:34:37.449218] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.117 16:34:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.374 16:34:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.374 "name": "raid_bdev1", 00:17:06.374 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:06.374 "strip_size_kb": 0, 00:17:06.374 "state": "online", 00:17:06.374 "raid_level": "raid1", 00:17:06.374 "superblock": true, 00:17:06.374 "num_base_bdevs": 3, 00:17:06.374 "num_base_bdevs_discovered": 3, 00:17:06.374 "num_base_bdevs_operational": 3, 00:17:06.374 "base_bdevs_list": [ 00:17:06.374 { 00:17:06.374 "name": 
"pt1", 00:17:06.374 "uuid": "beb7c2ad-9b13-521d-87c0-eabe7b63abb0", 00:17:06.374 "is_configured": true, 00:17:06.374 "data_offset": 2048, 00:17:06.374 "data_size": 63488 00:17:06.374 }, 00:17:06.374 { 00:17:06.374 "name": "pt2", 00:17:06.374 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 00:17:06.374 "is_configured": true, 00:17:06.374 "data_offset": 2048, 00:17:06.374 "data_size": 63488 00:17:06.374 }, 00:17:06.374 { 00:17:06.374 "name": "pt3", 00:17:06.374 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:06.374 "is_configured": true, 00:17:06.374 "data_offset": 2048, 00:17:06.374 "data_size": 63488 00:17:06.374 } 00:17:06.374 ] 00:17:06.374 }' 00:17:06.374 16:34:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.374 16:34:37 -- common/autotest_common.sh@10 -- # set +x 00:17:06.940 16:34:38 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:06.940 16:34:38 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:07.198 [2024-07-13 16:34:38.557747] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.198 16:34:38 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=bbce2faa-3e36-4dcc-ad60-13454fb47734 00:17:07.198 16:34:38 -- bdev/bdev_raid.sh@380 -- # '[' -z bbce2faa-3e36-4dcc-ad60-13454fb47734 ']' 00:17:07.198 16:34:38 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:07.457 [2024-07-13 16:34:38.829534] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.457 [2024-07-13 16:34:38.829576] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.457 [2024-07-13 16:34:38.829721] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.457 [2024-07-13 16:34:38.829829] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.457 [2024-07-13 16:34:38.829841] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:17:07.457 16:34:38 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.457 16:34:38 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:07.716 16:34:39 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:07.716 16:34:39 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:07.716 16:34:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.716 16:34:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:07.975 16:34:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.975 16:34:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:08.234 16:34:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.234 16:34:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:08.492 16:34:39 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:08.492 16:34:39 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:08.492 16:34:39 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:08.492 16:34:39 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:08.492 16:34:39 -- common/autotest_common.sh@640 -- # local es=0 00:17:08.492 16:34:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:08.492 16:34:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.492 16:34:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:08.492 16:34:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.493 16:34:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:08.493 16:34:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.493 16:34:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:08.493 16:34:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.493 16:34:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:08.493 16:34:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:08.751 [2024-07-13 16:34:40.149768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:08.751 [2024-07-13 16:34:40.152186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:08.752 [2024-07-13 16:34:40.152236] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:08.752 [2024-07-13 16:34:40.152298] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:08.752 [2024-07-13 16:34:40.152392] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:08.752 [2024-07-13 16:34:40.152422] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:08.752 [2024-07-13 16:34:40.152471] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.752 [2024-07-13 16:34:40.152481] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:17:08.752 request: 00:17:08.752 { 00:17:08.752 "name": "raid_bdev1", 00:17:08.752 "raid_level": "raid1", 00:17:08.752 "base_bdevs": [ 00:17:08.752 "malloc1", 00:17:08.752 "malloc2", 00:17:08.752 "malloc3" 00:17:08.752 ], 00:17:08.752 "superblock": false, 00:17:08.752 "method": "bdev_raid_create", 00:17:08.752 "req_id": 1 00:17:08.752 } 00:17:08.752 Got JSON-RPC error response 00:17:08.752 response: 00:17:08.752 { 00:17:08.752 "code": -17, 00:17:08.752 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:08.752 } 00:17:08.752 16:34:40 -- common/autotest_common.sh@643 -- # es=1 00:17:08.752 16:34:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:08.752 16:34:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:08.752 16:34:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:08.752 16:34:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:08.752 16:34:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:09.011 16:34:40 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:09.011 16:34:40 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:09.012 16:34:40 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:09.271 [2024-07-13 16:34:40.609783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:09.271 [2024-07-13 16:34:40.609876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.271 [2024-07-13 16:34:40.609922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:09.271 [2024-07-13 16:34:40.609953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.271 [2024-07-13 16:34:40.612795] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.271 [2024-07-13 16:34:40.612848] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:09.271 [2024-07-13 16:34:40.612960] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:09.271 [2024-07-13 16:34:40.613021] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:09.271 pt1 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.271 16:34:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.530 16:34:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.530 "name": "raid_bdev1", 00:17:09.530 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:09.530 "strip_size_kb": 0, 00:17:09.530 "state": "configuring", 00:17:09.530 "raid_level": "raid1", 00:17:09.530 "superblock": true, 00:17:09.530 "num_base_bdevs": 3, 00:17:09.530 "num_base_bdevs_discovered": 1, 00:17:09.530 "num_base_bdevs_operational": 3, 00:17:09.530 "base_bdevs_list": [ 00:17:09.530 { 00:17:09.530 "name": "pt1", 00:17:09.530 "uuid": "beb7c2ad-9b13-521d-87c0-eabe7b63abb0", 00:17:09.530 "is_configured": true, 00:17:09.530 "data_offset": 2048, 00:17:09.530 "data_size": 63488 00:17:09.530 }, 00:17:09.530 { 00:17:09.530 "name": null, 00:17:09.530 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 00:17:09.530 "is_configured": false, 00:17:09.530 "data_offset": 2048, 00:17:09.530 "data_size": 63488 00:17:09.530 }, 00:17:09.530 { 00:17:09.530 "name": null, 00:17:09.530 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:09.530 "is_configured": false, 00:17:09.530 "data_offset": 2048, 00:17:09.530 
"data_size": 63488 00:17:09.530 } 00:17:09.530 ] 00:17:09.530 }' 00:17:09.530 16:34:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.530 16:34:40 -- common/autotest_common.sh@10 -- # set +x 00:17:10.096 16:34:41 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:10.096 16:34:41 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.354 [2024-07-13 16:34:41.653999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.354 [2024-07-13 16:34:41.654152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.354 [2024-07-13 16:34:41.654199] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:10.354 [2024-07-13 16:34:41.654242] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.354 [2024-07-13 16:34:41.654710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.354 [2024-07-13 16:34:41.654757] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.354 [2024-07-13 16:34:41.654858] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:10.354 [2024-07-13 16:34:41.654881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.354 pt2 00:17:10.354 16:34:41 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:10.612 [2024-07-13 16:34:41.850083] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.612 16:34:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.871 16:34:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.871 "name": "raid_bdev1", 00:17:10.871 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:10.871 "strip_size_kb": 0, 00:17:10.871 "state": "configuring", 00:17:10.871 "raid_level": "raid1", 00:17:10.871 "superblock": true, 00:17:10.871 "num_base_bdevs": 3, 00:17:10.871 "num_base_bdevs_discovered": 1, 00:17:10.871 "num_base_bdevs_operational": 3, 00:17:10.871 "base_bdevs_list": [ 00:17:10.871 { 00:17:10.871 "name": "pt1", 00:17:10.871 "uuid": "beb7c2ad-9b13-521d-87c0-eabe7b63abb0", 00:17:10.871 "is_configured": true, 00:17:10.871 "data_offset": 2048, 00:17:10.871 "data_size": 63488 00:17:10.871 }, 00:17:10.871 { 00:17:10.871 "name": null, 00:17:10.871 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 
00:17:10.871 "is_configured": false, 00:17:10.871 "data_offset": 2048, 00:17:10.871 "data_size": 63488 00:17:10.871 }, 00:17:10.871 { 00:17:10.871 "name": null, 00:17:10.871 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:10.871 "is_configured": false, 00:17:10.871 "data_offset": 2048, 00:17:10.871 "data_size": 63488 00:17:10.871 } 00:17:10.871 ] 00:17:10.871 }' 00:17:10.871 16:34:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.871 16:34:42 -- common/autotest_common.sh@10 -- # set +x 00:17:11.437 16:34:42 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:11.437 16:34:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:11.437 16:34:42 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:11.695 [2024-07-13 16:34:43.062294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:11.695 [2024-07-13 16:34:43.062410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.695 [2024-07-13 16:34:43.062449] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:11.695 [2024-07-13 16:34:43.062482] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.695 [2024-07-13 16:34:43.062952] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.695 [2024-07-13 16:34:43.062995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:11.695 [2024-07-13 16:34:43.063094] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:11.695 [2024-07-13 16:34:43.063116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.695 pt2 00:17:11.695 16:34:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:11.695 16:34:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:11.695 16:34:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:11.954 [2024-07-13 16:34:43.262415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:11.954 [2024-07-13 16:34:43.262532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.954 [2024-07-13 16:34:43.262577] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:11.954 [2024-07-13 16:34:43.262606] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.954 [2024-07-13 16:34:43.263084] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.954 [2024-07-13 16:34:43.263127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:11.954 [2024-07-13 16:34:43.263231] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:11.954 [2024-07-13 16:34:43.263252] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:11.954 [2024-07-13 16:34:43.263387] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:11.954 [2024-07-13 16:34:43.263396] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:11.954 [2024-07-13 16:34:43.263470] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:11.954 
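For orientation, the stack being re-assembled in these traces is three malloc bdevs, each claimed by a passthru bdev, joined into a raid1 array with an on-member superblock. A sketch reconstructed from the RPC calls visible in this trace (sizes, bdev names, the zero-padded UUID pattern and the -s flag are copied from it; nothing else is assumed):

# Rebuild the three-way stack exercised here: malloc -> passthru -> raid1.
RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
    $RPC bdev_malloc_create 32 512 -b "malloc$i"             # 32 MiB, 512 B blocks
    $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"          # pt$i claims malloc$i
done
# -s writes a superblock to each member, which is what lets the array be
# re-discovered from the surviving members later in the test.
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s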
[2024-07-13 16:34:43.263771] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:11.954 [2024-07-13 16:34:43.263790] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:11.954 [2024-07-13 16:34:43.263893] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.954 pt3 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.954 16:34:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.213 16:34:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.213 "name": "raid_bdev1", 00:17:12.213 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:12.213 "strip_size_kb": 0, 00:17:12.213 "state": "online", 00:17:12.213 "raid_level": "raid1", 00:17:12.213 "superblock": true, 00:17:12.213 "num_base_bdevs": 3, 00:17:12.213 "num_base_bdevs_discovered": 3, 00:17:12.213 "num_base_bdevs_operational": 3, 00:17:12.213 "base_bdevs_list": [ 00:17:12.213 { 00:17:12.213 "name": "pt1", 00:17:12.213 "uuid": "beb7c2ad-9b13-521d-87c0-eabe7b63abb0", 00:17:12.213 "is_configured": true, 00:17:12.213 "data_offset": 2048, 00:17:12.213 "data_size": 63488 00:17:12.213 }, 00:17:12.213 { 00:17:12.213 "name": "pt2", 00:17:12.213 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 00:17:12.213 "is_configured": true, 00:17:12.213 "data_offset": 2048, 00:17:12.213 "data_size": 63488 00:17:12.213 }, 00:17:12.213 { 00:17:12.213 "name": "pt3", 00:17:12.213 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:12.213 "is_configured": true, 00:17:12.213 "data_offset": 2048, 00:17:12.213 "data_size": 63488 00:17:12.213 } 00:17:12.213 ] 00:17:12.213 }' 00:17:12.213 16:34:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.213 16:34:43 -- common/autotest_common.sh@10 -- # set +x 00:17:12.778 16:34:44 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:12.778 16:34:44 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:13.036 [2024-07-13 16:34:44.414796] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.036 16:34:44 -- bdev/bdev_raid.sh@430 -- # '[' bbce2faa-3e36-4dcc-ad60-13454fb47734 '!=' bbce2faa-3e36-4dcc-ad60-13454fb47734 ']' 00:17:13.036 16:34:44 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:13.036 16:34:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:13.036 16:34:44 -- bdev/bdev_raid.sh@196 -- # return 0 
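Because raid1 has redundancy (has_redundancy returned 0 just above), deleting one member must leave the array online with one fewer discovered base bdev, which is exactly what the next verify asserts. A hedged sketch of that step, inlining the same bdev_raid_get_bdevs-plus-jq query the suite uses (socket path and bdev names are the ones appearing in the trace):

# Remove one raid1 member and confirm the array degrades but stays online.
RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_passthru_delete pt1
state=$($RPC bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1")
           | "\(.state) \(.num_base_bdevs_discovered)"')
[[ $state == "online 2" ]] || echo "unexpected raid state: $state" >&2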
00:17:13.036 16:34:44 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:13.294 [2024-07-13 16:34:44.702682] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.294 16:34:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.295 16:34:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.295 16:34:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.553 16:34:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.553 "name": "raid_bdev1", 00:17:13.553 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:13.553 "strip_size_kb": 0, 00:17:13.553 "state": "online", 00:17:13.553 "raid_level": "raid1", 00:17:13.553 "superblock": true, 00:17:13.553 "num_base_bdevs": 3, 00:17:13.553 "num_base_bdevs_discovered": 2, 00:17:13.553 "num_base_bdevs_operational": 2, 00:17:13.553 "base_bdevs_list": [ 00:17:13.553 { 00:17:13.553 "name": null, 00:17:13.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.553 "is_configured": false, 00:17:13.553 "data_offset": 2048, 00:17:13.553 "data_size": 63488 00:17:13.553 }, 00:17:13.553 { 00:17:13.553 "name": "pt2", 00:17:13.553 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 00:17:13.553 "is_configured": true, 00:17:13.553 "data_offset": 2048, 00:17:13.553 "data_size": 63488 00:17:13.553 }, 00:17:13.553 { 00:17:13.553 "name": "pt3", 00:17:13.553 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:13.553 "is_configured": true, 00:17:13.553 "data_offset": 2048, 00:17:13.553 "data_size": 63488 00:17:13.553 } 00:17:13.553 ] 00:17:13.553 }' 00:17:13.553 16:34:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.553 16:34:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.120 16:34:45 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:14.378 [2024-07-13 16:34:45.690850] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.378 [2024-07-13 16:34:45.690901] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.378 [2024-07-13 16:34:45.690994] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.378 [2024-07-13 16:34:45.691069] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.378 [2024-07-13 16:34:45.691079] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:14.378 16:34:45 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:14.378 16:34:45 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:14.637 16:34:45 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:14.637 16:34:45 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:14.637 16:34:45 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:14.637 16:34:45 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:14.637 16:34:45 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:14.896 16:34:46 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:14.896 16:34:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:14.896 16:34:46 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:15.155 16:34:46 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:15.155 16:34:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:15.155 16:34:46 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:15.155 16:34:46 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:15.155 16:34:46 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.415 [2024-07-13 16:34:46.738991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.415 [2024-07-13 16:34:46.739099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.415 [2024-07-13 16:34:46.739142] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:15.415 [2024-07-13 16:34:46.739165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.415 [2024-07-13 16:34:46.741932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.415 [2024-07-13 16:34:46.742002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.415 [2024-07-13 16:34:46.742119] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:15.415 [2024-07-13 16:34:46.742168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.415 pt2 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.415 16:34:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.675 16:34:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.675 "name": "raid_bdev1", 00:17:15.675 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:15.675 "strip_size_kb": 0, 00:17:15.675 "state": "configuring", 00:17:15.675 "raid_level": 
"raid1", 00:17:15.675 "superblock": true, 00:17:15.675 "num_base_bdevs": 3, 00:17:15.675 "num_base_bdevs_discovered": 1, 00:17:15.675 "num_base_bdevs_operational": 2, 00:17:15.675 "base_bdevs_list": [ 00:17:15.675 { 00:17:15.675 "name": null, 00:17:15.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.675 "is_configured": false, 00:17:15.675 "data_offset": 2048, 00:17:15.675 "data_size": 63488 00:17:15.675 }, 00:17:15.675 { 00:17:15.675 "name": "pt2", 00:17:15.675 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 00:17:15.675 "is_configured": true, 00:17:15.675 "data_offset": 2048, 00:17:15.675 "data_size": 63488 00:17:15.675 }, 00:17:15.675 { 00:17:15.675 "name": null, 00:17:15.675 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:15.675 "is_configured": false, 00:17:15.675 "data_offset": 2048, 00:17:15.675 "data_size": 63488 00:17:15.675 } 00:17:15.675 ] 00:17:15.675 }' 00:17:15.675 16:34:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.675 16:34:46 -- common/autotest_common.sh@10 -- # set +x 00:17:16.266 16:34:47 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:16.266 16:34:47 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:16.266 16:34:47 -- bdev/bdev_raid.sh@462 -- # i=2 00:17:16.266 16:34:47 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.555 [2024-07-13 16:34:47.792836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.555 [2024-07-13 16:34:47.792957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.555 [2024-07-13 16:34:47.793015] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:16.555 [2024-07-13 16:34:47.793040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.555 [2024-07-13 16:34:47.793559] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.555 [2024-07-13 16:34:47.793594] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.555 [2024-07-13 16:34:47.793707] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:16.555 [2024-07-13 16:34:47.793732] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.555 [2024-07-13 16:34:47.793844] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:16.555 [2024-07-13 16:34:47.793852] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:16.555 [2024-07-13 16:34:47.793918] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:17:16.556 [2024-07-13 16:34:47.794226] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:16.556 [2024-07-13 16:34:47.794237] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:16.556 [2024-07-13 16:34:47.794339] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.556 pt3 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:16.556 
16:34:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.556 16:34:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.556 16:34:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.556 "name": "raid_bdev1", 00:17:16.556 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:16.556 "strip_size_kb": 0, 00:17:16.556 "state": "online", 00:17:16.556 "raid_level": "raid1", 00:17:16.556 "superblock": true, 00:17:16.556 "num_base_bdevs": 3, 00:17:16.556 "num_base_bdevs_discovered": 2, 00:17:16.556 "num_base_bdevs_operational": 2, 00:17:16.556 "base_bdevs_list": [ 00:17:16.556 { 00:17:16.556 "name": null, 00:17:16.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.556 "is_configured": false, 00:17:16.556 "data_offset": 2048, 00:17:16.556 "data_size": 63488 00:17:16.556 }, 00:17:16.556 { 00:17:16.556 "name": "pt2", 00:17:16.556 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 00:17:16.556 "is_configured": true, 00:17:16.556 "data_offset": 2048, 00:17:16.556 "data_size": 63488 00:17:16.556 }, 00:17:16.556 { 00:17:16.556 "name": "pt3", 00:17:16.556 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:16.556 "is_configured": true, 00:17:16.556 "data_offset": 2048, 00:17:16.556 "data_size": 63488 00:17:16.556 } 00:17:16.556 ] 00:17:16.556 }' 00:17:16.556 16:34:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.556 16:34:48 -- common/autotest_common.sh@10 -- # set +x 00:17:17.491 16:34:48 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:17:17.491 16:34:48 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:17.491 [2024-07-13 16:34:48.848978] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.491 [2024-07-13 16:34:48.849045] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.491 [2024-07-13 16:34:48.849142] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.491 [2024-07-13 16:34:48.849217] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.491 [2024-07-13 16:34:48.849227] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:17.491 16:34:48 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.491 16:34:48 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:17.751 16:34:49 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:17.751 16:34:49 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:17.751 16:34:49 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.010 [2024-07-13 16:34:49.277045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.010 [2024-07-13 
16:34:49.277155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.010 [2024-07-13 16:34:49.277204] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:18.010 [2024-07-13 16:34:49.277227] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.010 [2024-07-13 16:34:49.280078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.010 [2024-07-13 16:34:49.280137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.010 [2024-07-13 16:34:49.280276] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:18.010 [2024-07-13 16:34:49.280322] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:18.010 pt1 00:17:18.010 16:34:49 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:18.010 16:34:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:18.010 16:34:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:18.010 16:34:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:18.010 16:34:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:18.010 16:34:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:18.011 16:34:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.011 16:34:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.011 16:34:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.011 16:34:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.011 16:34:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.011 16:34:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.270 16:34:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.270 "name": "raid_bdev1", 00:17:18.270 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:18.270 "strip_size_kb": 0, 00:17:18.270 "state": "configuring", 00:17:18.270 "raid_level": "raid1", 00:17:18.270 "superblock": true, 00:17:18.270 "num_base_bdevs": 3, 00:17:18.270 "num_base_bdevs_discovered": 1, 00:17:18.270 "num_base_bdevs_operational": 3, 00:17:18.270 "base_bdevs_list": [ 00:17:18.270 { 00:17:18.270 "name": "pt1", 00:17:18.270 "uuid": "beb7c2ad-9b13-521d-87c0-eabe7b63abb0", 00:17:18.270 "is_configured": true, 00:17:18.270 "data_offset": 2048, 00:17:18.270 "data_size": 63488 00:17:18.270 }, 00:17:18.270 { 00:17:18.270 "name": null, 00:17:18.270 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 00:17:18.270 "is_configured": false, 00:17:18.270 "data_offset": 2048, 00:17:18.270 "data_size": 63488 00:17:18.270 }, 00:17:18.270 { 00:17:18.270 "name": null, 00:17:18.270 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:18.270 "is_configured": false, 00:17:18.270 "data_offset": 2048, 00:17:18.270 "data_size": 63488 00:17:18.270 } 00:17:18.270 ] 00:17:18.270 }' 00:17:18.270 16:34:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.270 16:34:49 -- common/autotest_common.sh@10 -- # set +x 00:17:18.839 16:34:50 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:18.839 16:34:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:18.839 16:34:50 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:18.839 16:34:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:18.839 
16:34:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:18.839 16:34:50 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:19.099 16:34:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:19.099 16:34:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:19.099 16:34:50 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:19.099 16:34:50 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:19.358 [2024-07-13 16:34:50.756768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:19.358 [2024-07-13 16:34:50.756888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.358 [2024-07-13 16:34:50.756928] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:19.358 [2024-07-13 16:34:50.756958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.358 [2024-07-13 16:34:50.757516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.358 [2024-07-13 16:34:50.757567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:19.358 [2024-07-13 16:34:50.757685] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:19.358 [2024-07-13 16:34:50.757703] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:19.358 [2024-07-13 16:34:50.757712] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.358 [2024-07-13 16:34:50.757756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:17:19.358 [2024-07-13 16:34:50.757827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:19.358 pt3 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.358 16:34:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.617 16:34:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.617 "name": "raid_bdev1", 00:17:19.617 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:19.617 "strip_size_kb": 0, 00:17:19.617 "state": "configuring", 00:17:19.618 "raid_level": "raid1", 00:17:19.618 "superblock": true, 00:17:19.618 "num_base_bdevs": 3, 00:17:19.618 "num_base_bdevs_discovered": 1, 00:17:19.618 "num_base_bdevs_operational": 2, 00:17:19.618 
"base_bdevs_list": [ 00:17:19.618 { 00:17:19.618 "name": null, 00:17:19.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.618 "is_configured": false, 00:17:19.618 "data_offset": 2048, 00:17:19.618 "data_size": 63488 00:17:19.618 }, 00:17:19.618 { 00:17:19.618 "name": null, 00:17:19.618 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 00:17:19.618 "is_configured": false, 00:17:19.618 "data_offset": 2048, 00:17:19.618 "data_size": 63488 00:17:19.618 }, 00:17:19.618 { 00:17:19.618 "name": "pt3", 00:17:19.618 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:19.618 "is_configured": true, 00:17:19.618 "data_offset": 2048, 00:17:19.618 "data_size": 63488 00:17:19.618 } 00:17:19.618 ] 00:17:19.618 }' 00:17:19.618 16:34:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.618 16:34:50 -- common/autotest_common.sh@10 -- # set +x 00:17:20.186 16:34:51 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:20.186 16:34:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:20.186 16:34:51 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.447 [2024-07-13 16:34:51.868945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.447 [2024-07-13 16:34:51.869070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.447 [2024-07-13 16:34:51.869112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:20.447 [2024-07-13 16:34:51.869143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.447 [2024-07-13 16:34:51.869676] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.447 [2024-07-13 16:34:51.869725] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.447 [2024-07-13 16:34:51.869827] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:20.447 [2024-07-13 16:34:51.869859] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.447 [2024-07-13 16:34:51.869988] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:17:20.447 [2024-07-13 16:34:51.869997] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:20.447 [2024-07-13 16:34:51.870079] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:17:20.447 [2024-07-13 16:34:51.870403] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:17:20.447 [2024-07-13 16:34:51.870431] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:17:20.447 [2024-07-13 16:34:51.870540] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.447 pt2 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:20.447 16:34:51 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.447 16:34:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.706 16:34:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.706 "name": "raid_bdev1", 00:17:20.706 "uuid": "bbce2faa-3e36-4dcc-ad60-13454fb47734", 00:17:20.706 "strip_size_kb": 0, 00:17:20.706 "state": "online", 00:17:20.706 "raid_level": "raid1", 00:17:20.706 "superblock": true, 00:17:20.706 "num_base_bdevs": 3, 00:17:20.706 "num_base_bdevs_discovered": 2, 00:17:20.706 "num_base_bdevs_operational": 2, 00:17:20.706 "base_bdevs_list": [ 00:17:20.706 { 00:17:20.706 "name": null, 00:17:20.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.706 "is_configured": false, 00:17:20.706 "data_offset": 2048, 00:17:20.706 "data_size": 63488 00:17:20.706 }, 00:17:20.706 { 00:17:20.706 "name": "pt2", 00:17:20.706 "uuid": "13a273ca-4da9-5f1e-ad9f-9378e4e4faad", 00:17:20.706 "is_configured": true, 00:17:20.706 "data_offset": 2048, 00:17:20.706 "data_size": 63488 00:17:20.706 }, 00:17:20.706 { 00:17:20.706 "name": "pt3", 00:17:20.706 "uuid": "db62ed39-f664-5a91-ab51-115e86278eb1", 00:17:20.706 "is_configured": true, 00:17:20.706 "data_offset": 2048, 00:17:20.706 "data_size": 63488 00:17:20.706 } 00:17:20.706 ] 00:17:20.706 }' 00:17:20.706 16:34:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.706 16:34:52 -- common/autotest_common.sh@10 -- # set +x 00:17:21.276 16:34:52 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:21.276 16:34:52 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:21.541 [2024-07-13 16:34:52.941387] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.541 16:34:52 -- bdev/bdev_raid.sh@506 -- # '[' bbce2faa-3e36-4dcc-ad60-13454fb47734 '!=' bbce2faa-3e36-4dcc-ad60-13454fb47734 ']' 00:17:21.541 16:34:52 -- bdev/bdev_raid.sh@511 -- # killprocess 128430 00:17:21.541 16:34:52 -- common/autotest_common.sh@926 -- # '[' -z 128430 ']' 00:17:21.541 16:34:52 -- common/autotest_common.sh@930 -- # kill -0 128430 00:17:21.541 16:34:52 -- common/autotest_common.sh@931 -- # uname 00:17:21.541 16:34:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.541 16:34:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128430 00:17:21.541 16:34:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:21.541 16:34:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:21.541 killing process with pid 128430 00:17:21.541 16:34:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128430' 00:17:21.541 16:34:52 -- common/autotest_common.sh@945 -- # kill 128430 00:17:21.541 [2024-07-13 16:34:52.997982] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.541 [2024-07-13 16:34:52.998109] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.541 [2024-07-13 16:34:52.998185] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:17:21.541 [2024-07-13 16:34:52.998201] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:17:21.541 16:34:52 -- common/autotest_common.sh@950 -- # wait 128430 00:17:21.803 [2024-07-13 16:34:53.060493] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.060 16:34:53 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:22.060 00:17:22.060 real 0m18.542s 00:17:22.060 user 0m33.378s 00:17:22.060 sys 0m3.458s 00:17:22.060 16:34:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.060 16:34:53 -- common/autotest_common.sh@10 -- # set +x 00:17:22.060 ************************************ 00:17:22.060 END TEST raid_superblock_test 00:17:22.060 ************************************ 00:17:22.060 16:34:53 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:22.060 16:34:53 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:22.060 16:34:53 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:22.060 16:34:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:22.060 16:34:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:22.060 16:34:53 -- common/autotest_common.sh@10 -- # set +x 00:17:22.317 ************************************ 00:17:22.317 START TEST raid_state_function_test 00:17:22.317 ************************************ 00:17:22.317 16:34:53 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:22.317 
16:34:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=129029 00:17:22.317 Process raid pid: 129029 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129029' 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:22.317 16:34:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129029 /var/tmp/spdk-raid.sock 00:17:22.317 16:34:53 -- common/autotest_common.sh@819 -- # '[' -z 129029 ']' 00:17:22.317 16:34:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:22.317 16:34:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:22.317 16:34:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:22.317 16:34:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.317 16:34:53 -- common/autotest_common.sh@10 -- # set +x 00:17:22.317 [2024-07-13 16:34:53.596204] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:22.317 [2024-07-13 16:34:53.596444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.317 [2024-07-13 16:34:53.745663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.575 [2024-07-13 16:34:53.832054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.575 [2024-07-13 16:34:53.917371] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.142 16:34:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:23.142 16:34:54 -- common/autotest_common.sh@852 -- # return 0 00:17:23.142 16:34:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:23.402 [2024-07-13 16:34:54.756449] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.402 [2024-07-13 16:34:54.756562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.402 [2024-07-13 16:34:54.756574] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.402 [2024-07-13 16:34:54.756594] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.402 [2024-07-13 16:34:54.756601] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:23.402 [2024-07-13 16:34:54.756649] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.402 [2024-07-13 16:34:54.756656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:23.402 [2024-07-13 16:34:54.756684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:23.402 16:34:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:23.402 16:34:54 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:23.402 16:34:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.402 16:34:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:23.402 16:34:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:23.402 16:34:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:23.403 16:34:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.403 16:34:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.403 16:34:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.403 16:34:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.403 16:34:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.403 16:34:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.661 16:34:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.661 "name": "Existed_Raid", 00:17:23.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.661 "strip_size_kb": 64, 00:17:23.661 "state": "configuring", 00:17:23.661 "raid_level": "raid0", 00:17:23.661 "superblock": false, 00:17:23.661 "num_base_bdevs": 4, 00:17:23.661 "num_base_bdevs_discovered": 0, 00:17:23.661 "num_base_bdevs_operational": 4, 00:17:23.661 "base_bdevs_list": [ 00:17:23.661 { 00:17:23.661 "name": "BaseBdev1", 00:17:23.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.662 "is_configured": false, 00:17:23.662 "data_offset": 0, 00:17:23.662 "data_size": 0 00:17:23.662 }, 00:17:23.662 { 00:17:23.662 "name": "BaseBdev2", 00:17:23.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.662 "is_configured": false, 00:17:23.662 "data_offset": 0, 00:17:23.662 "data_size": 0 00:17:23.662 }, 00:17:23.662 { 00:17:23.662 "name": "BaseBdev3", 00:17:23.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.662 "is_configured": false, 00:17:23.662 "data_offset": 0, 00:17:23.662 "data_size": 0 00:17:23.662 }, 00:17:23.662 { 00:17:23.662 "name": "BaseBdev4", 00:17:23.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.662 "is_configured": false, 00:17:23.662 "data_offset": 0, 00:17:23.662 "data_size": 0 00:17:23.662 } 00:17:23.662 ] 00:17:23.662 }' 00:17:23.662 16:34:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.662 16:34:54 -- common/autotest_common.sh@10 -- # set +x 00:17:24.229 16:34:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.490 [2024-07-13 16:34:55.724501] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.490 [2024-07-13 16:34:55.724562] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:24.490 16:34:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:24.748 [2024-07-13 16:34:55.996605] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.748 [2024-07-13 16:34:55.996699] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.748 [2024-07-13 16:34:55.996710] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.748 [2024-07-13 16:34:55.996736] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:17:24.748 [2024-07-13 16:34:55.996744] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.748 [2024-07-13 16:34:55.996762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.748 [2024-07-13 16:34:55.996768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:24.748 [2024-07-13 16:34:55.996794] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:24.748 16:34:56 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:25.006 [2024-07-13 16:34:56.288760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.006 BaseBdev1 00:17:25.006 16:34:56 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:25.006 16:34:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:25.006 16:34:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:25.006 16:34:56 -- common/autotest_common.sh@889 -- # local i 00:17:25.006 16:34:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:25.006 16:34:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:25.006 16:34:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.264 16:34:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:25.522 [ 00:17:25.522 { 00:17:25.522 "name": "BaseBdev1", 00:17:25.522 "aliases": [ 00:17:25.522 "50a46ceb-091d-4b14-9a39-bffe027b4792" 00:17:25.522 ], 00:17:25.522 "product_name": "Malloc disk", 00:17:25.522 "block_size": 512, 00:17:25.522 "num_blocks": 65536, 00:17:25.522 "uuid": "50a46ceb-091d-4b14-9a39-bffe027b4792", 00:17:25.522 "assigned_rate_limits": { 00:17:25.522 "rw_ios_per_sec": 0, 00:17:25.522 "rw_mbytes_per_sec": 0, 00:17:25.522 "r_mbytes_per_sec": 0, 00:17:25.522 "w_mbytes_per_sec": 0 00:17:25.522 }, 00:17:25.522 "claimed": true, 00:17:25.522 "claim_type": "exclusive_write", 00:17:25.522 "zoned": false, 00:17:25.522 "supported_io_types": { 00:17:25.522 "read": true, 00:17:25.522 "write": true, 00:17:25.522 "unmap": true, 00:17:25.522 "write_zeroes": true, 00:17:25.522 "flush": true, 00:17:25.522 "reset": true, 00:17:25.522 "compare": false, 00:17:25.522 "compare_and_write": false, 00:17:25.522 "abort": true, 00:17:25.522 "nvme_admin": false, 00:17:25.522 "nvme_io": false 00:17:25.522 }, 00:17:25.522 "memory_domains": [ 00:17:25.522 { 00:17:25.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.522 "dma_device_type": 2 00:17:25.522 } 00:17:25.522 ], 00:17:25.522 "driver_specific": {} 00:17:25.522 } 00:17:25.522 ] 00:17:25.522 16:34:56 -- common/autotest_common.sh@895 -- # return 0 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.522 16:34:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.522 "name": "Existed_Raid", 00:17:25.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.522 "strip_size_kb": 64, 00:17:25.522 "state": "configuring", 00:17:25.522 "raid_level": "raid0", 00:17:25.522 "superblock": false, 00:17:25.522 "num_base_bdevs": 4, 00:17:25.522 "num_base_bdevs_discovered": 1, 00:17:25.522 "num_base_bdevs_operational": 4, 00:17:25.522 "base_bdevs_list": [ 00:17:25.522 { 00:17:25.522 "name": "BaseBdev1", 00:17:25.522 "uuid": "50a46ceb-091d-4b14-9a39-bffe027b4792", 00:17:25.522 "is_configured": true, 00:17:25.522 "data_offset": 0, 00:17:25.522 "data_size": 65536 00:17:25.522 }, 00:17:25.522 { 00:17:25.522 "name": "BaseBdev2", 00:17:25.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.522 "is_configured": false, 00:17:25.522 "data_offset": 0, 00:17:25.522 "data_size": 0 00:17:25.522 }, 00:17:25.522 { 00:17:25.522 "name": "BaseBdev3", 00:17:25.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.522 "is_configured": false, 00:17:25.522 "data_offset": 0, 00:17:25.523 "data_size": 0 00:17:25.523 }, 00:17:25.523 { 00:17:25.523 "name": "BaseBdev4", 00:17:25.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.523 "is_configured": false, 00:17:25.523 "data_offset": 0, 00:17:25.523 "data_size": 0 00:17:25.523 } 00:17:25.523 ] 00:17:25.523 }' 00:17:25.523 16:34:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.523 16:34:56 -- common/autotest_common.sh@10 -- # set +x 00:17:26.458 16:34:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.458 [2024-07-13 16:34:57.741058] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.458 [2024-07-13 16:34:57.741146] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:26.458 16:34:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:26.458 16:34:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:26.716 [2024-07-13 16:34:57.993230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.716 [2024-07-13 16:34:57.995703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.716 [2024-07-13 16:34:57.995794] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.716 [2024-07-13 16:34:57.995805] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.716 [2024-07-13 16:34:57.995830] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.716 [2024-07-13 16:34:57.995838] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:26.716 [2024-07-13 16:34:57.995856] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.716 16:34:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.974 16:34:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.974 "name": "Existed_Raid", 00:17:26.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.974 "strip_size_kb": 64, 00:17:26.974 "state": "configuring", 00:17:26.974 "raid_level": "raid0", 00:17:26.974 "superblock": false, 00:17:26.974 "num_base_bdevs": 4, 00:17:26.974 "num_base_bdevs_discovered": 1, 00:17:26.974 "num_base_bdevs_operational": 4, 00:17:26.974 "base_bdevs_list": [ 00:17:26.974 { 00:17:26.974 "name": "BaseBdev1", 00:17:26.974 "uuid": "50a46ceb-091d-4b14-9a39-bffe027b4792", 00:17:26.974 "is_configured": true, 00:17:26.974 "data_offset": 0, 00:17:26.974 "data_size": 65536 00:17:26.974 }, 00:17:26.974 { 00:17:26.974 "name": "BaseBdev2", 00:17:26.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.974 "is_configured": false, 00:17:26.974 "data_offset": 0, 00:17:26.974 "data_size": 0 00:17:26.974 }, 00:17:26.974 { 00:17:26.974 "name": "BaseBdev3", 00:17:26.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.975 "is_configured": false, 00:17:26.975 "data_offset": 0, 00:17:26.975 "data_size": 0 00:17:26.975 }, 00:17:26.975 { 00:17:26.975 "name": "BaseBdev4", 00:17:26.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.975 "is_configured": false, 00:17:26.975 "data_offset": 0, 00:17:26.975 "data_size": 0 00:17:26.975 } 00:17:26.975 ] 00:17:26.975 }' 00:17:26.975 16:34:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.975 16:34:58 -- common/autotest_common.sh@10 -- # set +x 00:17:27.542 16:34:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.542 [2024-07-13 16:34:58.982368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.542 BaseBdev2 00:17:27.542 16:34:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:27.542 16:34:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:27.542 16:34:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:27.542 16:34:58 -- common/autotest_common.sh@889 -- # local i 00:17:27.542 16:34:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:27.542 16:34:58 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:17:27.542 16:34:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:28.110 16:34:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.110 [ 00:17:28.110 { 00:17:28.110 "name": "BaseBdev2", 00:17:28.110 "aliases": [ 00:17:28.110 "1d42f509-236d-4b0d-b18c-65e4e0d3bc70" 00:17:28.110 ], 00:17:28.110 "product_name": "Malloc disk", 00:17:28.110 "block_size": 512, 00:17:28.110 "num_blocks": 65536, 00:17:28.110 "uuid": "1d42f509-236d-4b0d-b18c-65e4e0d3bc70", 00:17:28.110 "assigned_rate_limits": { 00:17:28.110 "rw_ios_per_sec": 0, 00:17:28.111 "rw_mbytes_per_sec": 0, 00:17:28.111 "r_mbytes_per_sec": 0, 00:17:28.111 "w_mbytes_per_sec": 0 00:17:28.111 }, 00:17:28.111 "claimed": true, 00:17:28.111 "claim_type": "exclusive_write", 00:17:28.111 "zoned": false, 00:17:28.111 "supported_io_types": { 00:17:28.111 "read": true, 00:17:28.111 "write": true, 00:17:28.111 "unmap": true, 00:17:28.111 "write_zeroes": true, 00:17:28.111 "flush": true, 00:17:28.111 "reset": true, 00:17:28.111 "compare": false, 00:17:28.111 "compare_and_write": false, 00:17:28.111 "abort": true, 00:17:28.111 "nvme_admin": false, 00:17:28.111 "nvme_io": false 00:17:28.111 }, 00:17:28.111 "memory_domains": [ 00:17:28.111 { 00:17:28.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.111 "dma_device_type": 2 00:17:28.111 } 00:17:28.111 ], 00:17:28.111 "driver_specific": {} 00:17:28.111 } 00:17:28.111 ] 00:17:28.111 16:34:59 -- common/autotest_common.sh@895 -- # return 0 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.111 16:34:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.370 16:34:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.370 "name": "Existed_Raid", 00:17:28.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.370 "strip_size_kb": 64, 00:17:28.370 "state": "configuring", 00:17:28.370 "raid_level": "raid0", 00:17:28.370 "superblock": false, 00:17:28.370 "num_base_bdevs": 4, 00:17:28.370 "num_base_bdevs_discovered": 2, 00:17:28.370 "num_base_bdevs_operational": 4, 00:17:28.370 "base_bdevs_list": [ 00:17:28.370 { 00:17:28.370 "name": "BaseBdev1", 00:17:28.370 "uuid": "50a46ceb-091d-4b14-9a39-bffe027b4792", 00:17:28.370 "is_configured": true, 00:17:28.370 "data_offset": 0, 00:17:28.370 "data_size": 65536 00:17:28.370 }, 
00:17:28.370 { 00:17:28.370 "name": "BaseBdev2", 00:17:28.370 "uuid": "1d42f509-236d-4b0d-b18c-65e4e0d3bc70", 00:17:28.370 "is_configured": true, 00:17:28.370 "data_offset": 0, 00:17:28.370 "data_size": 65536 00:17:28.370 }, 00:17:28.370 { 00:17:28.370 "name": "BaseBdev3", 00:17:28.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.370 "is_configured": false, 00:17:28.370 "data_offset": 0, 00:17:28.370 "data_size": 0 00:17:28.370 }, 00:17:28.370 { 00:17:28.370 "name": "BaseBdev4", 00:17:28.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.370 "is_configured": false, 00:17:28.370 "data_offset": 0, 00:17:28.370 "data_size": 0 00:17:28.370 } 00:17:28.370 ] 00:17:28.370 }' 00:17:28.370 16:34:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.370 16:34:59 -- common/autotest_common.sh@10 -- # set +x 00:17:28.937 16:35:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:29.196 [2024-07-13 16:35:00.628180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.196 BaseBdev3 00:17:29.196 16:35:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:29.196 16:35:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:29.196 16:35:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:29.196 16:35:00 -- common/autotest_common.sh@889 -- # local i 00:17:29.196 16:35:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:29.196 16:35:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:29.196 16:35:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.455 16:35:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:29.714 [ 00:17:29.714 { 00:17:29.714 "name": "BaseBdev3", 00:17:29.714 "aliases": [ 00:17:29.714 "c6a41ea6-be43-48c8-ba70-6005fbc83290" 00:17:29.714 ], 00:17:29.714 "product_name": "Malloc disk", 00:17:29.714 "block_size": 512, 00:17:29.714 "num_blocks": 65536, 00:17:29.714 "uuid": "c6a41ea6-be43-48c8-ba70-6005fbc83290", 00:17:29.714 "assigned_rate_limits": { 00:17:29.714 "rw_ios_per_sec": 0, 00:17:29.714 "rw_mbytes_per_sec": 0, 00:17:29.714 "r_mbytes_per_sec": 0, 00:17:29.714 "w_mbytes_per_sec": 0 00:17:29.714 }, 00:17:29.714 "claimed": true, 00:17:29.714 "claim_type": "exclusive_write", 00:17:29.714 "zoned": false, 00:17:29.714 "supported_io_types": { 00:17:29.714 "read": true, 00:17:29.714 "write": true, 00:17:29.714 "unmap": true, 00:17:29.714 "write_zeroes": true, 00:17:29.714 "flush": true, 00:17:29.714 "reset": true, 00:17:29.714 "compare": false, 00:17:29.714 "compare_and_write": false, 00:17:29.714 "abort": true, 00:17:29.714 "nvme_admin": false, 00:17:29.714 "nvme_io": false 00:17:29.714 }, 00:17:29.714 "memory_domains": [ 00:17:29.714 { 00:17:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.714 "dma_device_type": 2 00:17:29.714 } 00:17:29.714 ], 00:17:29.714 "driver_specific": {} 00:17:29.714 } 00:17:29.714 ] 00:17:29.714 16:35:01 -- common/autotest_common.sh@895 -- # return 0 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.714 16:35:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.974 16:35:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.974 16:35:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.974 16:35:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.974 "name": "Existed_Raid", 00:17:29.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.974 "strip_size_kb": 64, 00:17:29.974 "state": "configuring", 00:17:29.974 "raid_level": "raid0", 00:17:29.974 "superblock": false, 00:17:29.974 "num_base_bdevs": 4, 00:17:29.974 "num_base_bdevs_discovered": 3, 00:17:29.974 "num_base_bdevs_operational": 4, 00:17:29.974 "base_bdevs_list": [ 00:17:29.974 { 00:17:29.974 "name": "BaseBdev1", 00:17:29.974 "uuid": "50a46ceb-091d-4b14-9a39-bffe027b4792", 00:17:29.974 "is_configured": true, 00:17:29.974 "data_offset": 0, 00:17:29.974 "data_size": 65536 00:17:29.974 }, 00:17:29.974 { 00:17:29.974 "name": "BaseBdev2", 00:17:29.974 "uuid": "1d42f509-236d-4b0d-b18c-65e4e0d3bc70", 00:17:29.974 "is_configured": true, 00:17:29.974 "data_offset": 0, 00:17:29.974 "data_size": 65536 00:17:29.974 }, 00:17:29.974 { 00:17:29.974 "name": "BaseBdev3", 00:17:29.974 "uuid": "c6a41ea6-be43-48c8-ba70-6005fbc83290", 00:17:29.974 "is_configured": true, 00:17:29.974 "data_offset": 0, 00:17:29.974 "data_size": 65536 00:17:29.974 }, 00:17:29.974 { 00:17:29.974 "name": "BaseBdev4", 00:17:29.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.974 "is_configured": false, 00:17:29.974 "data_offset": 0, 00:17:29.974 "data_size": 0 00:17:29.974 } 00:17:29.974 ] 00:17:29.974 }' 00:17:29.974 16:35:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.974 16:35:01 -- common/autotest_common.sh@10 -- # set +x 00:17:30.543 16:35:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:30.803 [2024-07-13 16:35:02.238213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:30.803 [2024-07-13 16:35:02.238547] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:17:30.803 [2024-07-13 16:35:02.238591] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:30.803 [2024-07-13 16:35:02.238861] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:17:30.803 [2024-07-13 16:35:02.239439] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:17:30.803 [2024-07-13 16:35:02.239552] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:17:30.803 [2024-07-13 16:35:02.239883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.803 BaseBdev4 00:17:30.803 16:35:02 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev4 00:17:30.803 16:35:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:30.803 16:35:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:30.803 16:35:02 -- common/autotest_common.sh@889 -- # local i 00:17:30.803 16:35:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:30.803 16:35:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:30.803 16:35:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.062 16:35:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:31.321 [ 00:17:31.321 { 00:17:31.321 "name": "BaseBdev4", 00:17:31.321 "aliases": [ 00:17:31.321 "f2c846e2-fcc8-4c65-b69c-46ed4787e9aa" 00:17:31.321 ], 00:17:31.321 "product_name": "Malloc disk", 00:17:31.321 "block_size": 512, 00:17:31.321 "num_blocks": 65536, 00:17:31.321 "uuid": "f2c846e2-fcc8-4c65-b69c-46ed4787e9aa", 00:17:31.321 "assigned_rate_limits": { 00:17:31.321 "rw_ios_per_sec": 0, 00:17:31.321 "rw_mbytes_per_sec": 0, 00:17:31.321 "r_mbytes_per_sec": 0, 00:17:31.321 "w_mbytes_per_sec": 0 00:17:31.321 }, 00:17:31.321 "claimed": true, 00:17:31.321 "claim_type": "exclusive_write", 00:17:31.321 "zoned": false, 00:17:31.321 "supported_io_types": { 00:17:31.321 "read": true, 00:17:31.321 "write": true, 00:17:31.321 "unmap": true, 00:17:31.321 "write_zeroes": true, 00:17:31.321 "flush": true, 00:17:31.321 "reset": true, 00:17:31.321 "compare": false, 00:17:31.321 "compare_and_write": false, 00:17:31.321 "abort": true, 00:17:31.321 "nvme_admin": false, 00:17:31.321 "nvme_io": false 00:17:31.321 }, 00:17:31.321 "memory_domains": [ 00:17:31.321 { 00:17:31.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.321 "dma_device_type": 2 00:17:31.321 } 00:17:31.321 ], 00:17:31.321 "driver_specific": {} 00:17:31.321 } 00:17:31.321 ] 00:17:31.321 16:35:02 -- common/autotest_common.sh@895 -- # return 0 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.321 16:35:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.579 16:35:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.579 "name": "Existed_Raid", 00:17:31.579 "uuid": "1c760a23-584d-4299-8672-76cd0991c7be", 00:17:31.579 "strip_size_kb": 64, 00:17:31.579 "state": "online", 00:17:31.579 "raid_level": "raid0", 00:17:31.579 "superblock": false, 00:17:31.579 
"num_base_bdevs": 4, 00:17:31.579 "num_base_bdevs_discovered": 4, 00:17:31.579 "num_base_bdevs_operational": 4, 00:17:31.579 "base_bdevs_list": [ 00:17:31.579 { 00:17:31.579 "name": "BaseBdev1", 00:17:31.579 "uuid": "50a46ceb-091d-4b14-9a39-bffe027b4792", 00:17:31.579 "is_configured": true, 00:17:31.579 "data_offset": 0, 00:17:31.579 "data_size": 65536 00:17:31.579 }, 00:17:31.579 { 00:17:31.579 "name": "BaseBdev2", 00:17:31.579 "uuid": "1d42f509-236d-4b0d-b18c-65e4e0d3bc70", 00:17:31.579 "is_configured": true, 00:17:31.579 "data_offset": 0, 00:17:31.579 "data_size": 65536 00:17:31.579 }, 00:17:31.579 { 00:17:31.579 "name": "BaseBdev3", 00:17:31.579 "uuid": "c6a41ea6-be43-48c8-ba70-6005fbc83290", 00:17:31.579 "is_configured": true, 00:17:31.579 "data_offset": 0, 00:17:31.579 "data_size": 65536 00:17:31.579 }, 00:17:31.579 { 00:17:31.579 "name": "BaseBdev4", 00:17:31.579 "uuid": "f2c846e2-fcc8-4c65-b69c-46ed4787e9aa", 00:17:31.579 "is_configured": true, 00:17:31.579 "data_offset": 0, 00:17:31.579 "data_size": 65536 00:17:31.579 } 00:17:31.579 ] 00:17:31.579 }' 00:17:31.579 16:35:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.579 16:35:02 -- common/autotest_common.sh@10 -- # set +x 00:17:32.145 16:35:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:32.404 [2024-07-13 16:35:03.634690] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.404 [2024-07-13 16:35:03.634985] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.404 [2024-07-13 16:35:03.635167] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.404 16:35:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.662 16:35:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.662 "name": "Existed_Raid", 00:17:32.662 "uuid": "1c760a23-584d-4299-8672-76cd0991c7be", 00:17:32.662 "strip_size_kb": 64, 00:17:32.662 "state": "offline", 00:17:32.662 "raid_level": "raid0", 00:17:32.662 "superblock": false, 00:17:32.662 "num_base_bdevs": 4, 00:17:32.662 "num_base_bdevs_discovered": 3, 00:17:32.662 "num_base_bdevs_operational": 3, 00:17:32.662 
"base_bdevs_list": [ 00:17:32.662 { 00:17:32.662 "name": null, 00:17:32.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.662 "is_configured": false, 00:17:32.662 "data_offset": 0, 00:17:32.662 "data_size": 65536 00:17:32.662 }, 00:17:32.662 { 00:17:32.662 "name": "BaseBdev2", 00:17:32.662 "uuid": "1d42f509-236d-4b0d-b18c-65e4e0d3bc70", 00:17:32.662 "is_configured": true, 00:17:32.662 "data_offset": 0, 00:17:32.662 "data_size": 65536 00:17:32.662 }, 00:17:32.662 { 00:17:32.662 "name": "BaseBdev3", 00:17:32.662 "uuid": "c6a41ea6-be43-48c8-ba70-6005fbc83290", 00:17:32.662 "is_configured": true, 00:17:32.662 "data_offset": 0, 00:17:32.662 "data_size": 65536 00:17:32.662 }, 00:17:32.662 { 00:17:32.662 "name": "BaseBdev4", 00:17:32.662 "uuid": "f2c846e2-fcc8-4c65-b69c-46ed4787e9aa", 00:17:32.662 "is_configured": true, 00:17:32.662 "data_offset": 0, 00:17:32.662 "data_size": 65536 00:17:32.662 } 00:17:32.662 ] 00:17:32.662 }' 00:17:32.662 16:35:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.662 16:35:03 -- common/autotest_common.sh@10 -- # set +x 00:17:33.230 16:35:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:33.230 16:35:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:33.230 16:35:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.230 16:35:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:33.489 16:35:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:33.489 16:35:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.489 16:35:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:33.748 [2024-07-13 16:35:05.024801] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.748 16:35:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:33.748 16:35:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:33.748 16:35:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:33.748 16:35:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.019 16:35:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:34.019 16:35:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.019 16:35:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:34.019 [2024-07-13 16:35:05.426591] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.019 16:35:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:34.019 16:35:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.019 16:35:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.019 16:35:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:34.307 16:35:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:34.307 16:35:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.307 16:35:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:34.577 [2024-07-13 16:35:05.820045] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:34.577 [2024-07-13 16:35:05.820492] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 
name Existed_Raid, state offline 00:17:34.577 16:35:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:34.577 16:35:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.577 16:35:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.577 16:35:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.835 16:35:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:34.835 16:35:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:34.835 16:35:06 -- bdev/bdev_raid.sh@287 -- # killprocess 129029 00:17:34.835 16:35:06 -- common/autotest_common.sh@926 -- # '[' -z 129029 ']' 00:17:34.835 16:35:06 -- common/autotest_common.sh@930 -- # kill -0 129029 00:17:34.835 16:35:06 -- common/autotest_common.sh@931 -- # uname 00:17:34.835 16:35:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:34.835 16:35:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129029 00:17:34.835 16:35:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:34.835 16:35:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:34.835 16:35:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129029' 00:17:34.835 killing process with pid 129029 00:17:34.835 16:35:06 -- common/autotest_common.sh@945 -- # kill 129029 00:17:34.835 16:35:06 -- common/autotest_common.sh@950 -- # wait 129029 00:17:34.835 [2024-07-13 16:35:06.142036] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.835 [2024-07-13 16:35:06.142332] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.095 16:35:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:35.095 00:17:35.095 real 0m13.008s 00:17:35.095 user 0m22.682s 00:17:35.095 sys 0m2.655s 00:17:35.095 16:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.095 16:35:06 -- common/autotest_common.sh@10 -- # set +x 00:17:35.095 ************************************ 00:17:35.095 END TEST raid_state_function_test 00:17:35.095 ************************************ 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:35.355 16:35:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:35.355 16:35:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:35.355 16:35:06 -- common/autotest_common.sh@10 -- # set +x 00:17:35.355 ************************************ 00:17:35.355 START TEST raid_state_function_test_sb 00:17:35.355 ************************************ 00:17:35.355 16:35:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=129454 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129454' 00:17:35.355 Process raid pid: 129454 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129454 /var/tmp/spdk-raid.sock 00:17:35.355 16:35:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:35.355 16:35:06 -- common/autotest_common.sh@819 -- # '[' -z 129454 ']' 00:17:35.355 16:35:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:35.355 16:35:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:35.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:35.355 16:35:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:35.355 16:35:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:35.355 16:35:06 -- common/autotest_common.sh@10 -- # set +x 00:17:35.355 [2024-07-13 16:35:06.686132] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
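Note: before any bdev_* RPC is issued, waitforlisten blocks until the freshly started bdev_svc target answers on its UNIX socket. A minimal sketch, assuming rpc_get_methods is an adequate liveness probe (the real helper in autotest_common.sh also honors the max_retries=100 traced above):

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$raid_pid" 2>/dev/null || exit 1   # bail out if the target died during startup
    sleep 0.1
done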
00:17:35.355 [2024-07-13 16:35:06.686563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.614 [2024-07-13 16:35:06.832592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.614 [2024-07-13 16:35:06.914214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.614 [2024-07-13 16:35:06.993073] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.183 16:35:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:36.183 16:35:07 -- common/autotest_common.sh@852 -- # return 0 00:17:36.183 16:35:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:36.442 [2024-07-13 16:35:07.746464] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:36.442 [2024-07-13 16:35:07.746838] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:36.442 [2024-07-13 16:35:07.746930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.442 [2024-07-13 16:35:07.746982] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.442 [2024-07-13 16:35:07.747008] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:36.442 [2024-07-13 16:35:07.747081] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:36.442 [2024-07-13 16:35:07.747325] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:36.442 [2024-07-13 16:35:07.747388] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.442 16:35:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.701 16:35:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.701 "name": "Existed_Raid", 00:17:36.701 "uuid": "598ea02b-3ac9-4ef7-8320-18afb405e7df", 00:17:36.701 "strip_size_kb": 64, 00:17:36.701 "state": "configuring", 00:17:36.701 "raid_level": "raid0", 00:17:36.701 "superblock": true, 00:17:36.701 "num_base_bdevs": 4, 00:17:36.701 "num_base_bdevs_discovered": 0, 00:17:36.701 "num_base_bdevs_operational": 4, 00:17:36.701 "base_bdevs_list": [ 00:17:36.701 { 00:17:36.701 
"name": "BaseBdev1", 00:17:36.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.701 "is_configured": false, 00:17:36.701 "data_offset": 0, 00:17:36.701 "data_size": 0 00:17:36.701 }, 00:17:36.701 { 00:17:36.701 "name": "BaseBdev2", 00:17:36.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.701 "is_configured": false, 00:17:36.701 "data_offset": 0, 00:17:36.701 "data_size": 0 00:17:36.701 }, 00:17:36.701 { 00:17:36.701 "name": "BaseBdev3", 00:17:36.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.701 "is_configured": false, 00:17:36.701 "data_offset": 0, 00:17:36.701 "data_size": 0 00:17:36.701 }, 00:17:36.701 { 00:17:36.701 "name": "BaseBdev4", 00:17:36.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.701 "is_configured": false, 00:17:36.701 "data_offset": 0, 00:17:36.701 "data_size": 0 00:17:36.701 } 00:17:36.701 ] 00:17:36.701 }' 00:17:36.701 16:35:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.701 16:35:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.268 16:35:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:37.527 [2024-07-13 16:35:08.802439] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.527 [2024-07-13 16:35:08.802758] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:37.527 16:35:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:37.786 [2024-07-13 16:35:09.002572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.786 [2024-07-13 16:35:09.002831] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.786 [2024-07-13 16:35:09.002909] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.786 [2024-07-13 16:35:09.002968] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.786 [2024-07-13 16:35:09.002995] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:37.786 [2024-07-13 16:35:09.003032] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.786 [2024-07-13 16:35:09.003056] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:37.786 [2024-07-13 16:35:09.003154] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:37.786 16:35:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:38.045 [2024-07-13 16:35:09.282736] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.045 BaseBdev1 00:17:38.045 16:35:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:38.045 16:35:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:38.045 16:35:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:38.045 16:35:09 -- common/autotest_common.sh@889 -- # local i 00:17:38.045 16:35:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:38.045 16:35:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:38.045 16:35:09 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:38.303 16:35:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:38.303 [ 00:17:38.303 { 00:17:38.303 "name": "BaseBdev1", 00:17:38.303 "aliases": [ 00:17:38.303 "37a205de-4d3a-4e4e-93ac-42a24b9dd6b8" 00:17:38.303 ], 00:17:38.303 "product_name": "Malloc disk", 00:17:38.303 "block_size": 512, 00:17:38.303 "num_blocks": 65536, 00:17:38.303 "uuid": "37a205de-4d3a-4e4e-93ac-42a24b9dd6b8", 00:17:38.303 "assigned_rate_limits": { 00:17:38.303 "rw_ios_per_sec": 0, 00:17:38.303 "rw_mbytes_per_sec": 0, 00:17:38.303 "r_mbytes_per_sec": 0, 00:17:38.303 "w_mbytes_per_sec": 0 00:17:38.303 }, 00:17:38.303 "claimed": true, 00:17:38.303 "claim_type": "exclusive_write", 00:17:38.303 "zoned": false, 00:17:38.303 "supported_io_types": { 00:17:38.303 "read": true, 00:17:38.303 "write": true, 00:17:38.303 "unmap": true, 00:17:38.303 "write_zeroes": true, 00:17:38.303 "flush": true, 00:17:38.303 "reset": true, 00:17:38.303 "compare": false, 00:17:38.303 "compare_and_write": false, 00:17:38.304 "abort": true, 00:17:38.304 "nvme_admin": false, 00:17:38.304 "nvme_io": false 00:17:38.304 }, 00:17:38.304 "memory_domains": [ 00:17:38.304 { 00:17:38.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.304 "dma_device_type": 2 00:17:38.304 } 00:17:38.304 ], 00:17:38.304 "driver_specific": {} 00:17:38.304 } 00:17:38.304 ] 00:17:38.304 16:35:09 -- common/autotest_common.sh@895 -- # return 0 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.304 16:35:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.563 16:35:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.563 "name": "Existed_Raid", 00:17:38.563 "uuid": "957a80ac-ec14-4ee9-9ff3-00e03f046fcd", 00:17:38.563 "strip_size_kb": 64, 00:17:38.563 "state": "configuring", 00:17:38.563 "raid_level": "raid0", 00:17:38.563 "superblock": true, 00:17:38.563 "num_base_bdevs": 4, 00:17:38.563 "num_base_bdevs_discovered": 1, 00:17:38.563 "num_base_bdevs_operational": 4, 00:17:38.563 "base_bdevs_list": [ 00:17:38.563 { 00:17:38.563 "name": "BaseBdev1", 00:17:38.563 "uuid": "37a205de-4d3a-4e4e-93ac-42a24b9dd6b8", 00:17:38.563 "is_configured": true, 00:17:38.563 "data_offset": 2048, 00:17:38.563 "data_size": 63488 00:17:38.563 }, 00:17:38.563 { 00:17:38.563 "name": "BaseBdev2", 00:17:38.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.563 "is_configured": false, 00:17:38.563 "data_offset": 0, 00:17:38.563 "data_size": 0 00:17:38.563 }, 
00:17:38.563 { 00:17:38.563 "name": "BaseBdev3", 00:17:38.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.563 "is_configured": false, 00:17:38.563 "data_offset": 0, 00:17:38.563 "data_size": 0 00:17:38.563 }, 00:17:38.563 { 00:17:38.563 "name": "BaseBdev4", 00:17:38.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.563 "is_configured": false, 00:17:38.563 "data_offset": 0, 00:17:38.563 "data_size": 0 00:17:38.563 } 00:17:38.563 ] 00:17:38.563 }' 00:17:38.563 16:35:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.563 16:35:09 -- common/autotest_common.sh@10 -- # set +x 00:17:39.131 16:35:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:39.388 [2024-07-13 16:35:10.751018] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:39.388 [2024-07-13 16:35:10.751265] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:39.388 16:35:10 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:39.388 16:35:10 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:39.647 16:35:10 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:39.905 BaseBdev1 00:17:39.905 16:35:11 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:39.905 16:35:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:39.905 16:35:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:39.905 16:35:11 -- common/autotest_common.sh@889 -- # local i 00:17:39.905 16:35:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:39.905 16:35:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:39.905 16:35:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.163 16:35:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.163 [ 00:17:40.163 { 00:17:40.163 "name": "BaseBdev1", 00:17:40.163 "aliases": [ 00:17:40.163 "79d8fde3-6a2f-4fbc-a7f6-f410246ed3f7" 00:17:40.163 ], 00:17:40.163 "product_name": "Malloc disk", 00:17:40.163 "block_size": 512, 00:17:40.163 "num_blocks": 65536, 00:17:40.163 "uuid": "79d8fde3-6a2f-4fbc-a7f6-f410246ed3f7", 00:17:40.163 "assigned_rate_limits": { 00:17:40.163 "rw_ios_per_sec": 0, 00:17:40.163 "rw_mbytes_per_sec": 0, 00:17:40.163 "r_mbytes_per_sec": 0, 00:17:40.163 "w_mbytes_per_sec": 0 00:17:40.163 }, 00:17:40.163 "claimed": false, 00:17:40.163 "zoned": false, 00:17:40.163 "supported_io_types": { 00:17:40.163 "read": true, 00:17:40.163 "write": true, 00:17:40.163 "unmap": true, 00:17:40.163 "write_zeroes": true, 00:17:40.163 "flush": true, 00:17:40.163 "reset": true, 00:17:40.163 "compare": false, 00:17:40.163 "compare_and_write": false, 00:17:40.163 "abort": true, 00:17:40.163 "nvme_admin": false, 00:17:40.163 "nvme_io": false 00:17:40.163 }, 00:17:40.163 "memory_domains": [ 00:17:40.163 { 00:17:40.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.163 "dma_device_type": 2 00:17:40.163 } 00:17:40.163 ], 00:17:40.163 "driver_specific": {} 00:17:40.163 } 00:17:40.163 ] 00:17:40.423 16:35:11 -- common/autotest_common.sh@895 -- # return 0 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:40.423 [2024-07-13 16:35:11.807952] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.423 [2024-07-13 16:35:11.810640] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.423 [2024-07-13 16:35:11.810856] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.423 [2024-07-13 16:35:11.810942] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:40.423 [2024-07-13 16:35:11.811071] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:40.423 [2024-07-13 16:35:11.811150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:40.423 [2024-07-13 16:35:11.811199] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.423 16:35:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.682 16:35:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.682 "name": "Existed_Raid", 00:17:40.682 "uuid": "2807b93c-2e37-49fe-9b54-d30d2e16f046", 00:17:40.682 "strip_size_kb": 64, 00:17:40.682 "state": "configuring", 00:17:40.682 "raid_level": "raid0", 00:17:40.682 "superblock": true, 00:17:40.682 "num_base_bdevs": 4, 00:17:40.682 "num_base_bdevs_discovered": 1, 00:17:40.682 "num_base_bdevs_operational": 4, 00:17:40.682 "base_bdevs_list": [ 00:17:40.682 { 00:17:40.682 "name": "BaseBdev1", 00:17:40.682 "uuid": "79d8fde3-6a2f-4fbc-a7f6-f410246ed3f7", 00:17:40.682 "is_configured": true, 00:17:40.682 "data_offset": 2048, 00:17:40.682 "data_size": 63488 00:17:40.682 }, 00:17:40.682 { 00:17:40.682 "name": "BaseBdev2", 00:17:40.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.682 "is_configured": false, 00:17:40.682 "data_offset": 0, 00:17:40.682 "data_size": 0 00:17:40.682 }, 00:17:40.682 { 00:17:40.682 "name": "BaseBdev3", 00:17:40.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.682 "is_configured": false, 00:17:40.682 "data_offset": 0, 00:17:40.682 "data_size": 0 00:17:40.682 }, 00:17:40.682 { 00:17:40.682 "name": "BaseBdev4", 00:17:40.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.682 "is_configured": 
false, 00:17:40.682 "data_offset": 0, 00:17:40.682 "data_size": 0 00:17:40.682 } 00:17:40.682 ] 00:17:40.682 }' 00:17:40.682 16:35:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.682 16:35:12 -- common/autotest_common.sh@10 -- # set +x 00:17:41.250 16:35:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:41.817 [2024-07-13 16:35:12.994522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:41.817 BaseBdev2 00:17:41.817 16:35:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:41.817 16:35:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:41.817 16:35:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:41.817 16:35:13 -- common/autotest_common.sh@889 -- # local i 00:17:41.817 16:35:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:41.817 16:35:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:41.817 16:35:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.076 16:35:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:42.335 [ 00:17:42.335 { 00:17:42.335 "name": "BaseBdev2", 00:17:42.335 "aliases": [ 00:17:42.335 "be7fbe34-8423-4cc6-b512-fc44ec7f67f3" 00:17:42.335 ], 00:17:42.335 "product_name": "Malloc disk", 00:17:42.335 "block_size": 512, 00:17:42.335 "num_blocks": 65536, 00:17:42.335 "uuid": "be7fbe34-8423-4cc6-b512-fc44ec7f67f3", 00:17:42.335 "assigned_rate_limits": { 00:17:42.335 "rw_ios_per_sec": 0, 00:17:42.335 "rw_mbytes_per_sec": 0, 00:17:42.335 "r_mbytes_per_sec": 0, 00:17:42.335 "w_mbytes_per_sec": 0 00:17:42.335 }, 00:17:42.335 "claimed": true, 00:17:42.335 "claim_type": "exclusive_write", 00:17:42.335 "zoned": false, 00:17:42.335 "supported_io_types": { 00:17:42.335 "read": true, 00:17:42.335 "write": true, 00:17:42.335 "unmap": true, 00:17:42.335 "write_zeroes": true, 00:17:42.335 "flush": true, 00:17:42.335 "reset": true, 00:17:42.335 "compare": false, 00:17:42.335 "compare_and_write": false, 00:17:42.335 "abort": true, 00:17:42.335 "nvme_admin": false, 00:17:42.335 "nvme_io": false 00:17:42.335 }, 00:17:42.335 "memory_domains": [ 00:17:42.335 { 00:17:42.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.335 "dma_device_type": 2 00:17:42.335 } 00:17:42.335 ], 00:17:42.335 "driver_specific": {} 00:17:42.335 } 00:17:42.335 ] 00:17:42.335 16:35:13 -- common/autotest_common.sh@895 -- # return 0 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.335 
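Note: every base bdev is added with the same create-and-wait pattern shown above for BaseBdev2: bdev_malloc_create 32 512 makes a 32 MiB ramdisk with 512-byte blocks, which is exactly the "num_blocks": 65536 in the JSON dump, and waitforbdev reduces to a timed lookup. A sketch (treating the -t 2000 timeout as milliseconds is an assumption based on the bdev_timeout=2000 seen in the trace):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
waitforbdev() {
    $rpc bdev_wait_for_examine                      # let examine/claim callbacks settle first
    $rpc bdev_get_bdevs -b "$1" -t 2000 >/dev/null  # errors out if the bdev never appears
}
$rpc bdev_malloc_create 32 512 -b BaseBdev2
waitforbdev BaseBdev2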
16:35:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.335 16:35:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.592 16:35:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.592 "name": "Existed_Raid", 00:17:42.592 "uuid": "2807b93c-2e37-49fe-9b54-d30d2e16f046", 00:17:42.592 "strip_size_kb": 64, 00:17:42.592 "state": "configuring", 00:17:42.592 "raid_level": "raid0", 00:17:42.592 "superblock": true, 00:17:42.592 "num_base_bdevs": 4, 00:17:42.592 "num_base_bdevs_discovered": 2, 00:17:42.592 "num_base_bdevs_operational": 4, 00:17:42.592 "base_bdevs_list": [ 00:17:42.592 { 00:17:42.592 "name": "BaseBdev1", 00:17:42.592 "uuid": "79d8fde3-6a2f-4fbc-a7f6-f410246ed3f7", 00:17:42.592 "is_configured": true, 00:17:42.592 "data_offset": 2048, 00:17:42.592 "data_size": 63488 00:17:42.592 }, 00:17:42.592 { 00:17:42.592 "name": "BaseBdev2", 00:17:42.592 "uuid": "be7fbe34-8423-4cc6-b512-fc44ec7f67f3", 00:17:42.592 "is_configured": true, 00:17:42.592 "data_offset": 2048, 00:17:42.592 "data_size": 63488 00:17:42.592 }, 00:17:42.592 { 00:17:42.592 "name": "BaseBdev3", 00:17:42.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.592 "is_configured": false, 00:17:42.592 "data_offset": 0, 00:17:42.592 "data_size": 0 00:17:42.592 }, 00:17:42.592 { 00:17:42.592 "name": "BaseBdev4", 00:17:42.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.592 "is_configured": false, 00:17:42.592 "data_offset": 0, 00:17:42.592 "data_size": 0 00:17:42.592 } 00:17:42.592 ] 00:17:42.592 }' 00:17:42.592 16:35:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.592 16:35:13 -- common/autotest_common.sh@10 -- # set +x 00:17:43.158 16:35:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:43.417 [2024-07-13 16:35:14.700336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:43.417 BaseBdev3 00:17:43.417 16:35:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:43.417 16:35:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:43.417 16:35:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:43.417 16:35:14 -- common/autotest_common.sh@889 -- # local i 00:17:43.417 16:35:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:43.417 16:35:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:43.417 16:35:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.677 16:35:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:43.935 [ 00:17:43.935 { 00:17:43.935 "name": "BaseBdev3", 00:17:43.935 "aliases": [ 00:17:43.935 "598a3a76-0473-4c29-a77f-e6447064b850" 00:17:43.935 ], 00:17:43.935 "product_name": "Malloc disk", 00:17:43.935 "block_size": 512, 00:17:43.935 "num_blocks": 65536, 00:17:43.935 "uuid": "598a3a76-0473-4c29-a77f-e6447064b850", 00:17:43.935 "assigned_rate_limits": { 00:17:43.935 "rw_ios_per_sec": 0, 00:17:43.935 "rw_mbytes_per_sec": 0, 00:17:43.935 "r_mbytes_per_sec": 0, 00:17:43.935 "w_mbytes_per_sec": 0 00:17:43.935 }, 00:17:43.935 "claimed": true, 00:17:43.935 "claim_type": "exclusive_write", 00:17:43.935 "zoned": false, 
00:17:43.935 "supported_io_types": { 00:17:43.935 "read": true, 00:17:43.935 "write": true, 00:17:43.935 "unmap": true, 00:17:43.935 "write_zeroes": true, 00:17:43.935 "flush": true, 00:17:43.935 "reset": true, 00:17:43.935 "compare": false, 00:17:43.935 "compare_and_write": false, 00:17:43.935 "abort": true, 00:17:43.935 "nvme_admin": false, 00:17:43.935 "nvme_io": false 00:17:43.935 }, 00:17:43.935 "memory_domains": [ 00:17:43.935 { 00:17:43.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.935 "dma_device_type": 2 00:17:43.935 } 00:17:43.935 ], 00:17:43.935 "driver_specific": {} 00:17:43.935 } 00:17:43.935 ] 00:17:43.935 16:35:15 -- common/autotest_common.sh@895 -- # return 0 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.935 16:35:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.193 16:35:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.193 "name": "Existed_Raid", 00:17:44.193 "uuid": "2807b93c-2e37-49fe-9b54-d30d2e16f046", 00:17:44.193 "strip_size_kb": 64, 00:17:44.193 "state": "configuring", 00:17:44.193 "raid_level": "raid0", 00:17:44.193 "superblock": true, 00:17:44.193 "num_base_bdevs": 4, 00:17:44.193 "num_base_bdevs_discovered": 3, 00:17:44.193 "num_base_bdevs_operational": 4, 00:17:44.193 "base_bdevs_list": [ 00:17:44.193 { 00:17:44.193 "name": "BaseBdev1", 00:17:44.193 "uuid": "79d8fde3-6a2f-4fbc-a7f6-f410246ed3f7", 00:17:44.193 "is_configured": true, 00:17:44.193 "data_offset": 2048, 00:17:44.193 "data_size": 63488 00:17:44.193 }, 00:17:44.193 { 00:17:44.193 "name": "BaseBdev2", 00:17:44.193 "uuid": "be7fbe34-8423-4cc6-b512-fc44ec7f67f3", 00:17:44.193 "is_configured": true, 00:17:44.193 "data_offset": 2048, 00:17:44.193 "data_size": 63488 00:17:44.193 }, 00:17:44.193 { 00:17:44.193 "name": "BaseBdev3", 00:17:44.193 "uuid": "598a3a76-0473-4c29-a77f-e6447064b850", 00:17:44.193 "is_configured": true, 00:17:44.193 "data_offset": 2048, 00:17:44.193 "data_size": 63488 00:17:44.193 }, 00:17:44.193 { 00:17:44.193 "name": "BaseBdev4", 00:17:44.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.193 "is_configured": false, 00:17:44.193 "data_offset": 0, 00:17:44.193 "data_size": 0 00:17:44.193 } 00:17:44.193 ] 00:17:44.193 }' 00:17:44.193 16:35:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.193 16:35:15 -- common/autotest_common.sh@10 -- # set +x 00:17:44.760 16:35:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:45.019 [2024-07-13 16:35:16.310171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:45.019 [2024-07-13 16:35:16.310724] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:17:45.019 [2024-07-13 16:35:16.310901] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:45.019 [2024-07-13 16:35:16.311112] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:17:45.019 [2024-07-13 16:35:16.311583] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:17:45.019 [2024-07-13 16:35:16.311624] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:17:45.019 [2024-07-13 16:35:16.311931] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.019 BaseBdev4 00:17:45.019 16:35:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:45.019 16:35:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:45.019 16:35:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:45.019 16:35:16 -- common/autotest_common.sh@889 -- # local i 00:17:45.019 16:35:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:45.019 16:35:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:45.019 16:35:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.278 16:35:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:45.536 [ 00:17:45.536 { 00:17:45.536 "name": "BaseBdev4", 00:17:45.536 "aliases": [ 00:17:45.536 "70a4a8a9-c23d-4a03-a180-299c93010bcc" 00:17:45.536 ], 00:17:45.536 "product_name": "Malloc disk", 00:17:45.536 "block_size": 512, 00:17:45.536 "num_blocks": 65536, 00:17:45.536 "uuid": "70a4a8a9-c23d-4a03-a180-299c93010bcc", 00:17:45.536 "assigned_rate_limits": { 00:17:45.536 "rw_ios_per_sec": 0, 00:17:45.536 "rw_mbytes_per_sec": 0, 00:17:45.536 "r_mbytes_per_sec": 0, 00:17:45.536 "w_mbytes_per_sec": 0 00:17:45.536 }, 00:17:45.536 "claimed": true, 00:17:45.536 "claim_type": "exclusive_write", 00:17:45.536 "zoned": false, 00:17:45.536 "supported_io_types": { 00:17:45.536 "read": true, 00:17:45.536 "write": true, 00:17:45.536 "unmap": true, 00:17:45.536 "write_zeroes": true, 00:17:45.536 "flush": true, 00:17:45.536 "reset": true, 00:17:45.536 "compare": false, 00:17:45.536 "compare_and_write": false, 00:17:45.536 "abort": true, 00:17:45.536 "nvme_admin": false, 00:17:45.536 "nvme_io": false 00:17:45.536 }, 00:17:45.536 "memory_domains": [ 00:17:45.536 { 00:17:45.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.536 "dma_device_type": 2 00:17:45.536 } 00:17:45.536 ], 00:17:45.536 "driver_specific": {} 00:17:45.536 } 00:17:45.536 ] 00:17:45.536 16:35:16 -- common/autotest_common.sh@895 -- # return 0 00:17:45.536 16:35:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:45.536 16:35:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:45.536 16:35:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:45.536 16:35:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:45.536 16:35:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:45.536 16:35:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
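Note: the "blockcnt 253952, blocklen 512" reported when the array comes online follows directly from the JSON dumps above: each malloc base bdev has 65536 blocks, the -s superblock reserves 2048 of them ("data_offset": 2048), leaving "data_size": 63488 per bdev, and raid0 concatenates all four:

    usable blocks = 4 * (65536 - 2048) = 4 * 63488 = 253952 blocks of 512 B (~124 MiB)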
00:17:45.536 16:35:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:45.536 16:35:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:45.537 16:35:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.537 16:35:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.537 16:35:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.537 16:35:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.537 16:35:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.537 16:35:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.537 16:35:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.537 "name": "Existed_Raid", 00:17:45.537 "uuid": "2807b93c-2e37-49fe-9b54-d30d2e16f046", 00:17:45.537 "strip_size_kb": 64, 00:17:45.537 "state": "online", 00:17:45.537 "raid_level": "raid0", 00:17:45.537 "superblock": true, 00:17:45.537 "num_base_bdevs": 4, 00:17:45.537 "num_base_bdevs_discovered": 4, 00:17:45.537 "num_base_bdevs_operational": 4, 00:17:45.537 "base_bdevs_list": [ 00:17:45.537 { 00:17:45.537 "name": "BaseBdev1", 00:17:45.537 "uuid": "79d8fde3-6a2f-4fbc-a7f6-f410246ed3f7", 00:17:45.537 "is_configured": true, 00:17:45.537 "data_offset": 2048, 00:17:45.537 "data_size": 63488 00:17:45.537 }, 00:17:45.537 { 00:17:45.537 "name": "BaseBdev2", 00:17:45.537 "uuid": "be7fbe34-8423-4cc6-b512-fc44ec7f67f3", 00:17:45.537 "is_configured": true, 00:17:45.537 "data_offset": 2048, 00:17:45.537 "data_size": 63488 00:17:45.537 }, 00:17:45.537 { 00:17:45.537 "name": "BaseBdev3", 00:17:45.537 "uuid": "598a3a76-0473-4c29-a77f-e6447064b850", 00:17:45.537 "is_configured": true, 00:17:45.537 "data_offset": 2048, 00:17:45.537 "data_size": 63488 00:17:45.537 }, 00:17:45.537 { 00:17:45.537 "name": "BaseBdev4", 00:17:45.537 "uuid": "70a4a8a9-c23d-4a03-a180-299c93010bcc", 00:17:45.537 "is_configured": true, 00:17:45.537 "data_offset": 2048, 00:17:45.537 "data_size": 63488 00:17:45.537 } 00:17:45.537 ] 00:17:45.537 }' 00:17:45.537 16:35:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.537 16:35:16 -- common/autotest_common.sh@10 -- # set +x 00:17:46.102 16:35:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:46.359 [2024-07-13 16:35:17.769085] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.359 [2024-07-13 16:35:17.769402] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.359 [2024-07-13 16:35:17.769583] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.359 16:35:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.960 16:35:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.960 "name": "Existed_Raid", 00:17:46.960 "uuid": "2807b93c-2e37-49fe-9b54-d30d2e16f046", 00:17:46.960 "strip_size_kb": 64, 00:17:46.960 "state": "offline", 00:17:46.960 "raid_level": "raid0", 00:17:46.960 "superblock": true, 00:17:46.960 "num_base_bdevs": 4, 00:17:46.960 "num_base_bdevs_discovered": 3, 00:17:46.960 "num_base_bdevs_operational": 3, 00:17:46.960 "base_bdevs_list": [ 00:17:46.960 { 00:17:46.960 "name": null, 00:17:46.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.960 "is_configured": false, 00:17:46.960 "data_offset": 2048, 00:17:46.960 "data_size": 63488 00:17:46.960 }, 00:17:46.960 { 00:17:46.960 "name": "BaseBdev2", 00:17:46.960 "uuid": "be7fbe34-8423-4cc6-b512-fc44ec7f67f3", 00:17:46.960 "is_configured": true, 00:17:46.960 "data_offset": 2048, 00:17:46.960 "data_size": 63488 00:17:46.960 }, 00:17:46.960 { 00:17:46.960 "name": "BaseBdev3", 00:17:46.960 "uuid": "598a3a76-0473-4c29-a77f-e6447064b850", 00:17:46.960 "is_configured": true, 00:17:46.960 "data_offset": 2048, 00:17:46.960 "data_size": 63488 00:17:46.960 }, 00:17:46.960 { 00:17:46.960 "name": "BaseBdev4", 00:17:46.960 "uuid": "70a4a8a9-c23d-4a03-a180-299c93010bcc", 00:17:46.960 "is_configured": true, 00:17:46.960 "data_offset": 2048, 00:17:46.960 "data_size": 63488 00:17:46.960 } 00:17:46.960 ] 00:17:46.960 }' 00:17:46.960 16:35:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.960 16:35:18 -- common/autotest_common.sh@10 -- # set +x 00:17:47.527 16:35:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:47.527 16:35:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:47.527 16:35:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:47.527 16:35:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.527 16:35:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:47.527 16:35:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.527 16:35:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:47.786 [2024-07-13 16:35:19.228844] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:48.044 16:35:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:48.044 16:35:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:48.044 16:35:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.044 16:35:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:48.303 16:35:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:48.303 16:35:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.303 16:35:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:17:48.303 [2024-07-13 16:35:19.702846] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:48.303 16:35:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:48.303 16:35:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:48.303 16:35:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.303 16:35:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:48.561 16:35:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:48.561 16:35:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.561 16:35:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:48.819 [2024-07-13 16:35:20.168292] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:48.819 [2024-07-13 16:35:20.168637] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:17:48.819 16:35:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:48.819 16:35:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:48.819 16:35:20 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.819 16:35:20 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:49.077 16:35:20 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:49.077 16:35:20 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:49.077 16:35:20 -- bdev/bdev_raid.sh@287 -- # killprocess 129454 00:17:49.077 16:35:20 -- common/autotest_common.sh@926 -- # '[' -z 129454 ']' 00:17:49.077 16:35:20 -- common/autotest_common.sh@930 -- # kill -0 129454 00:17:49.077 16:35:20 -- common/autotest_common.sh@931 -- # uname 00:17:49.077 16:35:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:49.078 16:35:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129454 00:17:49.078 killing process with pid 129454 00:17:49.078 16:35:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:49.078 16:35:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:49.078 16:35:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129454' 00:17:49.078 16:35:20 -- common/autotest_common.sh@945 -- # kill 129454 00:17:49.078 16:35:20 -- common/autotest_common.sh@950 -- # wait 129454 00:17:49.078 [2024-07-13 16:35:20.516899] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.078 [2024-07-13 16:35:20.517015] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.664 16:35:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:49.664 00:17:49.664 real 0m14.314s 00:17:49.664 user 0m25.251s 00:17:49.664 sys 0m2.700s 00:17:49.664 16:35:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.664 16:35:20 -- common/autotest_common.sh@10 -- # set +x 00:17:49.664 ************************************ 00:17:49.664 END TEST raid_state_function_test_sb 00:17:49.664 ************************************ 00:17:49.664 16:35:20 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:49.664 16:35:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:49.664 16:35:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:49.664 16:35:20 -- common/autotest_common.sh@10 -- # set +x 00:17:49.664 ************************************ 00:17:49.664 START 
TEST raid_superblock_test 00:17:49.664 ************************************ 00:17:49.664 16:35:21 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:17:49.664 16:35:21 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:49.664 16:35:21 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:49.664 16:35:21 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:49.664 16:35:21 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:49.664 16:35:21 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:49.664 16:35:21 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:49.664 16:35:21 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@357 -- # raid_pid=129895 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@358 -- # waitforlisten 129895 /var/tmp/spdk-raid.sock 00:17:49.665 16:35:21 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:49.665 16:35:21 -- common/autotest_common.sh@819 -- # '[' -z 129895 ']' 00:17:49.665 16:35:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:49.665 16:35:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:49.665 16:35:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:49.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:49.665 16:35:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:49.665 16:35:21 -- common/autotest_common.sh@10 -- # set +x 00:17:49.665 [2024-07-13 16:35:21.084114] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
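Note: unlike the state-function tests, raid_superblock_test layers a passthru bdev over each malloc so that the on-disk superblock written via -s outlives the stack: after the raid and the pt* bdevs are torn down, the superblock persists in the malloc data, and re-creating a raid0 directly from the malloc bdevs must fail with "Existing raid superblock found". A sketch of the stack it builds next (names, sizes and fixed UUIDs follow the trace below):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s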
00:17:49.665 [2024-07-13 16:35:21.084733] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129895 ] 00:17:49.927 [2024-07-13 16:35:21.239660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.927 [2024-07-13 16:35:21.326022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.191 [2024-07-13 16:35:21.406413] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.759 16:35:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:50.759 16:35:22 -- common/autotest_common.sh@852 -- # return 0 00:17:50.759 16:35:22 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:50.759 16:35:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:50.759 16:35:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:50.759 16:35:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:50.759 16:35:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:50.759 16:35:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:50.759 16:35:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:50.759 16:35:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:50.759 16:35:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:51.017 malloc1 00:17:51.017 16:35:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:51.276 [2024-07-13 16:35:22.540899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.276 [2024-07-13 16:35:22.541298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.276 [2024-07-13 16:35:22.541384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:51.276 [2024-07-13 16:35:22.541713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.276 [2024-07-13 16:35:22.544749] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.276 [2024-07-13 16:35:22.544938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.276 pt1 00:17:51.276 16:35:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:51.276 16:35:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:51.276 16:35:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:51.276 16:35:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:51.276 16:35:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:51.276 16:35:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.276 16:35:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.276 16:35:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.276 16:35:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:51.535 malloc2 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:51.535 [2024-07-13 16:35:22.941139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.535 [2024-07-13 16:35:22.941508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.535 [2024-07-13 16:35:22.941593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:51.535 [2024-07-13 16:35:22.941723] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.535 [2024-07-13 16:35:22.944550] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.535 [2024-07-13 16:35:22.944709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.535 pt2 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.535 16:35:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:51.792 malloc3 00:17:51.792 16:35:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:52.050 [2024-07-13 16:35:23.374211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:52.050 [2024-07-13 16:35:23.374547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.050 [2024-07-13 16:35:23.374645] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:52.050 [2024-07-13 16:35:23.374787] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.050 [2024-07-13 16:35:23.377925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.050 [2024-07-13 16:35:23.378141] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:52.050 pt3 00:17:52.050 16:35:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:52.050 16:35:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:52.050 16:35:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:52.050 16:35:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:52.050 16:35:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:52.050 16:35:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.050 16:35:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.050 16:35:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.050 16:35:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:52.309 malloc4 00:17:52.309 16:35:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:17:52.568 [2024-07-13 16:35:23.868497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:52.568 [2024-07-13 16:35:23.868908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.568 [2024-07-13 16:35:23.869001] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:52.568 [2024-07-13 16:35:23.869167] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.568 [2024-07-13 16:35:23.872520] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.568 [2024-07-13 16:35:23.872735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:52.568 pt4 00:17:52.568 16:35:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:52.568 16:35:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:52.568 16:35:23 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:52.828 [2024-07-13 16:35:24.057279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.828 [2024-07-13 16:35:24.060083] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.828 [2024-07-13 16:35:24.060318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:52.828 [2024-07-13 16:35:24.060398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:52.828 [2024-07-13 16:35:24.060705] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:52.828 [2024-07-13 16:35:24.060816] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:52.828 [2024-07-13 16:35:24.061028] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:52.828 [2024-07-13 16:35:24.061564] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:52.828 [2024-07-13 16:35:24.061665] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:52.828 [2024-07-13 16:35:24.061955] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.828 16:35:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.087 16:35:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.087 "name": "raid_bdev1", 00:17:53.087 "uuid": 
"f6fa5f5d-d46c-4600-8d7c-4aa865a609e4", 00:17:53.087 "strip_size_kb": 64, 00:17:53.087 "state": "online", 00:17:53.087 "raid_level": "raid0", 00:17:53.087 "superblock": true, 00:17:53.087 "num_base_bdevs": 4, 00:17:53.087 "num_base_bdevs_discovered": 4, 00:17:53.087 "num_base_bdevs_operational": 4, 00:17:53.087 "base_bdevs_list": [ 00:17:53.087 { 00:17:53.087 "name": "pt1", 00:17:53.087 "uuid": "21ae7043-f326-51f5-b3dc-a4e6f4ae4777", 00:17:53.087 "is_configured": true, 00:17:53.087 "data_offset": 2048, 00:17:53.087 "data_size": 63488 00:17:53.087 }, 00:17:53.087 { 00:17:53.087 "name": "pt2", 00:17:53.087 "uuid": "88199999-9dc8-5ff3-b77e-6f38e3db9df9", 00:17:53.087 "is_configured": true, 00:17:53.087 "data_offset": 2048, 00:17:53.087 "data_size": 63488 00:17:53.087 }, 00:17:53.087 { 00:17:53.087 "name": "pt3", 00:17:53.087 "uuid": "36d715e8-1b6a-5af8-acf2-24686366aada", 00:17:53.087 "is_configured": true, 00:17:53.087 "data_offset": 2048, 00:17:53.087 "data_size": 63488 00:17:53.087 }, 00:17:53.087 { 00:17:53.087 "name": "pt4", 00:17:53.087 "uuid": "2b89fab8-1f0a-58e6-a848-1f82c619bd05", 00:17:53.087 "is_configured": true, 00:17:53.087 "data_offset": 2048, 00:17:53.087 "data_size": 63488 00:17:53.087 } 00:17:53.087 ] 00:17:53.087 }' 00:17:53.087 16:35:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.087 16:35:24 -- common/autotest_common.sh@10 -- # set +x 00:17:53.656 16:35:24 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:53.656 16:35:24 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:53.656 [2024-07-13 16:35:25.090390] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.656 16:35:25 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f6fa5f5d-d46c-4600-8d7c-4aa865a609e4 00:17:53.656 16:35:25 -- bdev/bdev_raid.sh@380 -- # '[' -z f6fa5f5d-d46c-4600-8d7c-4aa865a609e4 ']' 00:17:53.656 16:35:25 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:53.915 [2024-07-13 16:35:25.342189] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.915 [2024-07-13 16:35:25.342470] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.915 [2024-07-13 16:35:25.342725] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.915 [2024-07-13 16:35:25.342921] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.915 [2024-07-13 16:35:25.343009] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:17:53.915 16:35:25 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.915 16:35:25 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:54.174 16:35:25 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:54.174 16:35:25 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:54.174 16:35:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.174 16:35:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:54.433 16:35:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.433 16:35:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:17:54.692 16:35:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.692 16:35:26 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:54.951 16:35:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:54.951 16:35:26 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:55.210 16:35:26 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:55.210 16:35:26 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:55.470 16:35:26 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:55.470 16:35:26 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:55.470 16:35:26 -- common/autotest_common.sh@640 -- # local es=0 00:17:55.470 16:35:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:55.470 16:35:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.470 16:35:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:55.470 16:35:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.470 16:35:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:55.470 16:35:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.470 16:35:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:55.470 16:35:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.470 16:35:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:55.470 16:35:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:55.729 [2024-07-13 16:35:26.994452] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:55.729 [2024-07-13 16:35:26.997058] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:55.729 [2024-07-13 16:35:26.997224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:55.729 [2024-07-13 16:35:26.997337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:55.729 [2024-07-13 16:35:26.997423] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:55.729 [2024-07-13 16:35:26.997625] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:55.729 [2024-07-13 16:35:26.997737] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:55.729 [2024-07-13 16:35:26.997815] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:55.729 [2024-07-13 16:35:26.997882] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.729 [2024-07-13 16:35:26.997951] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:17:55.729 request: 00:17:55.729 { 00:17:55.729 "name": "raid_bdev1", 00:17:55.729 "raid_level": "raid0", 00:17:55.729 "base_bdevs": [ 00:17:55.729 "malloc1", 00:17:55.729 "malloc2", 00:17:55.729 "malloc3", 00:17:55.729 "malloc4" 00:17:55.729 ], 00:17:55.729 "superblock": false, 00:17:55.729 "strip_size_kb": 64, 00:17:55.729 "method": "bdev_raid_create", 00:17:55.729 "req_id": 1 00:17:55.729 } 00:17:55.729 Got JSON-RPC error response 00:17:55.729 response: 00:17:55.729 { 00:17:55.729 "code": -17, 00:17:55.729 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:55.729 } 00:17:55.729 16:35:27 -- common/autotest_common.sh@643 -- # es=1 00:17:55.729 16:35:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:55.729 16:35:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:55.729 16:35:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:55.729 16:35:27 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.729 16:35:27 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:55.989 [2024-07-13 16:35:27.374503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:55.989 [2024-07-13 16:35:27.374819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.989 [2024-07-13 16:35:27.374894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:55.989 [2024-07-13 16:35:27.374997] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.989 [2024-07-13 16:35:27.378038] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.989 [2024-07-13 16:35:27.378267] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:55.989 [2024-07-13 16:35:27.378483] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:55.989 [2024-07-13 16:35:27.378666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:55.989 pt1 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.989 16:35:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.249 16:35:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.249 "name": "raid_bdev1", 00:17:56.249 "uuid": "f6fa5f5d-d46c-4600-8d7c-4aa865a609e4", 00:17:56.249 "strip_size_kb": 64, 00:17:56.249 "state": "configuring", 00:17:56.249 "raid_level": "raid0", 00:17:56.249 "superblock": true, 00:17:56.249 "num_base_bdevs": 4, 00:17:56.249 "num_base_bdevs_discovered": 1, 00:17:56.249 "num_base_bdevs_operational": 4, 00:17:56.249 "base_bdevs_list": [ 00:17:56.249 { 00:17:56.249 "name": "pt1", 00:17:56.249 "uuid": "21ae7043-f326-51f5-b3dc-a4e6f4ae4777", 00:17:56.249 "is_configured": true, 00:17:56.249 "data_offset": 2048, 00:17:56.249 "data_size": 63488 00:17:56.249 }, 00:17:56.249 { 00:17:56.249 "name": null, 00:17:56.249 "uuid": "88199999-9dc8-5ff3-b77e-6f38e3db9df9", 00:17:56.249 "is_configured": false, 00:17:56.249 "data_offset": 2048, 00:17:56.249 "data_size": 63488 00:17:56.249 }, 00:17:56.249 { 00:17:56.249 "name": null, 00:17:56.249 "uuid": "36d715e8-1b6a-5af8-acf2-24686366aada", 00:17:56.249 "is_configured": false, 00:17:56.249 "data_offset": 2048, 00:17:56.249 "data_size": 63488 00:17:56.249 }, 00:17:56.249 { 00:17:56.249 "name": null, 00:17:56.249 "uuid": "2b89fab8-1f0a-58e6-a848-1f82c619bd05", 00:17:56.249 "is_configured": false, 00:17:56.249 "data_offset": 2048, 00:17:56.249 "data_size": 63488 00:17:56.249 } 00:17:56.249 ] 00:17:56.249 }' 00:17:56.249 16:35:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.249 16:35:27 -- common/autotest_common.sh@10 -- # set +x 00:17:56.816 16:35:28 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:56.816 16:35:28 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.075 [2024-07-13 16:35:28.438750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.075 [2024-07-13 16:35:28.439125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.075 [2024-07-13 16:35:28.439212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:57.075 [2024-07-13 16:35:28.439313] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.075 [2024-07-13 16:35:28.439844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.075 [2024-07-13 16:35:28.440004] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.075 [2024-07-13 16:35:28.440186] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:57.075 [2024-07-13 16:35:28.440303] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.075 pt2 00:17:57.075 16:35:28 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:57.334 [2024-07-13 16:35:28.686843] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:57.334 16:35:28 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.334 16:35:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.593 16:35:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.593 "name": "raid_bdev1", 00:17:57.593 "uuid": "f6fa5f5d-d46c-4600-8d7c-4aa865a609e4", 00:17:57.593 "strip_size_kb": 64, 00:17:57.593 "state": "configuring", 00:17:57.593 "raid_level": "raid0", 00:17:57.593 "superblock": true, 00:17:57.593 "num_base_bdevs": 4, 00:17:57.593 "num_base_bdevs_discovered": 1, 00:17:57.593 "num_base_bdevs_operational": 4, 00:17:57.593 "base_bdevs_list": [ 00:17:57.593 { 00:17:57.593 "name": "pt1", 00:17:57.593 "uuid": "21ae7043-f326-51f5-b3dc-a4e6f4ae4777", 00:17:57.593 "is_configured": true, 00:17:57.593 "data_offset": 2048, 00:17:57.593 "data_size": 63488 00:17:57.593 }, 00:17:57.593 { 00:17:57.593 "name": null, 00:17:57.593 "uuid": "88199999-9dc8-5ff3-b77e-6f38e3db9df9", 00:17:57.593 "is_configured": false, 00:17:57.593 "data_offset": 2048, 00:17:57.593 "data_size": 63488 00:17:57.593 }, 00:17:57.593 { 00:17:57.593 "name": null, 00:17:57.593 "uuid": "36d715e8-1b6a-5af8-acf2-24686366aada", 00:17:57.593 "is_configured": false, 00:17:57.593 "data_offset": 2048, 00:17:57.593 "data_size": 63488 00:17:57.593 }, 00:17:57.593 { 00:17:57.593 "name": null, 00:17:57.593 "uuid": "2b89fab8-1f0a-58e6-a848-1f82c619bd05", 00:17:57.593 "is_configured": false, 00:17:57.593 "data_offset": 2048, 00:17:57.593 "data_size": 63488 00:17:57.593 } 00:17:57.593 ] 00:17:57.593 }' 00:17:57.593 16:35:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.593 16:35:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.160 16:35:29 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:58.160 16:35:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:58.160 16:35:29 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.420 [2024-07-13 16:35:29.759065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.420 [2024-07-13 16:35:29.759374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.420 [2024-07-13 16:35:29.759458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:58.420 [2024-07-13 16:35:29.759555] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.420 [2024-07-13 16:35:29.760085] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.420 [2024-07-13 16:35:29.760248] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.420 [2024-07-13 16:35:29.760441] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:58.420 [2024-07-13 16:35:29.760533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.420 pt2 00:17:58.420 16:35:29 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:58.420 16:35:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:58.420 16:35:29 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:58.679 [2024-07-13 16:35:30.035153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:58.679 [2024-07-13 16:35:30.035482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.679 [2024-07-13 16:35:30.035553] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:58.679 [2024-07-13 16:35:30.035652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.679 [2024-07-13 16:35:30.036213] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.679 [2024-07-13 16:35:30.036392] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:58.679 [2024-07-13 16:35:30.036558] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:58.679 [2024-07-13 16:35:30.036648] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:58.679 pt3 00:17:58.679 16:35:30 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:58.679 16:35:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:58.679 16:35:30 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:58.939 [2024-07-13 16:35:30.231170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:58.939 [2024-07-13 16:35:30.231483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.939 [2024-07-13 16:35:30.231555] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:58.939 [2024-07-13 16:35:30.231688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.939 [2024-07-13 16:35:30.232186] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.939 [2024-07-13 16:35:30.232362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:58.939 [2024-07-13 16:35:30.232534] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:58.939 [2024-07-13 16:35:30.232624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:58.939 [2024-07-13 16:35:30.232844] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:17:58.939 [2024-07-13 16:35:30.232936] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:58.939 [2024-07-13 16:35:30.233050] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:17:58.939 [2024-07-13 16:35:30.233446] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:17:58.939 [2024-07-13 16:35:30.233544] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:17:58.939 [2024-07-13 16:35:30.233708] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.939 pt4 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.939 16:35:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.198 16:35:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.198 "name": "raid_bdev1", 00:17:59.198 "uuid": "f6fa5f5d-d46c-4600-8d7c-4aa865a609e4", 00:17:59.198 "strip_size_kb": 64, 00:17:59.198 "state": "online", 00:17:59.198 "raid_level": "raid0", 00:17:59.198 "superblock": true, 00:17:59.198 "num_base_bdevs": 4, 00:17:59.198 "num_base_bdevs_discovered": 4, 00:17:59.198 "num_base_bdevs_operational": 4, 00:17:59.198 "base_bdevs_list": [ 00:17:59.198 { 00:17:59.198 "name": "pt1", 00:17:59.198 "uuid": "21ae7043-f326-51f5-b3dc-a4e6f4ae4777", 00:17:59.198 "is_configured": true, 00:17:59.199 "data_offset": 2048, 00:17:59.199 "data_size": 63488 00:17:59.199 }, 00:17:59.199 { 00:17:59.199 "name": "pt2", 00:17:59.199 "uuid": "88199999-9dc8-5ff3-b77e-6f38e3db9df9", 00:17:59.199 "is_configured": true, 00:17:59.199 "data_offset": 2048, 00:17:59.199 "data_size": 63488 00:17:59.199 }, 00:17:59.199 { 00:17:59.199 "name": "pt3", 00:17:59.199 "uuid": "36d715e8-1b6a-5af8-acf2-24686366aada", 00:17:59.199 "is_configured": true, 00:17:59.199 "data_offset": 2048, 00:17:59.199 "data_size": 63488 00:17:59.199 }, 00:17:59.199 { 00:17:59.199 "name": "pt4", 00:17:59.199 "uuid": "2b89fab8-1f0a-58e6-a848-1f82c619bd05", 00:17:59.199 "is_configured": true, 00:17:59.199 "data_offset": 2048, 00:17:59.199 "data_size": 63488 00:17:59.199 } 00:17:59.199 ] 00:17:59.199 }' 00:17:59.199 16:35:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.199 16:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:59.767 16:35:30 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:59.767 16:35:30 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:59.767 [2024-07-13 16:35:31.126456] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.767 16:35:31 -- bdev/bdev_raid.sh@430 -- # '[' f6fa5f5d-d46c-4600-8d7c-4aa865a609e4 '!=' f6fa5f5d-d46c-4600-8d7c-4aa865a609e4 ']' 00:17:59.767 16:35:31 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:59.768 16:35:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:59.768 16:35:31 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:59.768 16:35:31 -- bdev/bdev_raid.sh@511 -- # killprocess 129895 00:17:59.768 16:35:31 -- common/autotest_common.sh@926 -- # '[' -z 129895 ']' 00:17:59.768 16:35:31 -- common/autotest_common.sh@930 -- # kill -0 129895 00:17:59.768 16:35:31 -- common/autotest_common.sh@931 -- # uname 00:17:59.768 16:35:31 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.768 16:35:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129895 00:17:59.768 16:35:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:59.768 16:35:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:59.768 16:35:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129895' 00:17:59.768 killing process with pid 129895 00:17:59.768 16:35:31 -- common/autotest_common.sh@945 -- # kill 129895 00:17:59.768 [2024-07-13 16:35:31.178302] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.768 16:35:31 -- common/autotest_common.sh@950 -- # wait 129895 00:17:59.768 [2024-07-13 16:35:31.178502] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.768 [2024-07-13 16:35:31.178659] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.768 [2024-07-13 16:35:31.178766] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:00.027 [2024-07-13 16:35:31.259365] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:00.287 00:18:00.287 real 0m10.655s 00:18:00.287 user 0m18.700s 00:18:00.287 sys 0m1.782s 00:18:00.287 ************************************ 00:18:00.287 END TEST raid_superblock_test 00:18:00.287 ************************************ 00:18:00.287 16:35:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.287 16:35:31 -- common/autotest_common.sh@10 -- # set +x 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:00.287 16:35:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:00.287 16:35:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:00.287 16:35:31 -- common/autotest_common.sh@10 -- # set +x 00:18:00.287 ************************************ 00:18:00.287 START TEST raid_state_function_test 00:18:00.287 ************************************ 00:18:00.287 16:35:31 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:00.287 
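Stripped of xtrace noise, the raid_superblock_test that ends above is a short sequence of rpc.py calls against the test socket. A by-hand replay of its core steps, with paths, sizes and UUIDs copied from the trace (the jq filter is only illustrative), would look like:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

The trailing -s on bdev_raid_create writes an on-disk superblock; that is what lets the array come back online after the pt1..pt4 passthru bdevs are deleted and recreated, and it is also why the later attempt to build a fresh raid directly on 'malloc1 malloc2 malloc3 malloc4' is rejected with JSON-RPC error -17 ("File exists"): each malloc bdev still carries the old superblock.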
16:35:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=130218 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130218' 00:18:00.287 Process raid pid: 130218 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:00.287 16:35:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130218 /var/tmp/spdk-raid.sock 00:18:00.287 16:35:31 -- common/autotest_common.sh@819 -- # '[' -z 130218 ']' 00:18:00.287 16:35:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:00.287 16:35:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:00.287 16:35:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:00.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:00.287 16:35:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:00.287 16:35:31 -- common/autotest_common.sh@10 -- # set +x 00:18:00.547 [2024-07-13 16:35:31.816076] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
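The raid_state_function_test starting here drives a standalone bdev_svc app rather than a full SPDK target: the harness launches it on a private JSON-RPC socket and waits (waitforlisten) for pid 130218 to come up before issuing any RPCs. The launch line, verbatim from the trace:

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid

Here -r sets the RPC socket path and -L bdev_raid turns on the debug log flag responsible for the *DEBUG* lines throughout this transcript; -i 0 appears to select the shared-memory instance id, consistent with --file-prefix=spdk0 in the EAL parameters line that follows.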
00:18:00.547 [2024-07-13 16:35:31.816780] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.547 [2024-07-13 16:35:31.980483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.806 [2024-07-13 16:35:32.073047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.806 [2024-07-13 16:35:32.161749] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.372 16:35:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:01.373 16:35:32 -- common/autotest_common.sh@852 -- # return 0 00:18:01.373 16:35:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:01.629 [2024-07-13 16:35:32.985499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.630 [2024-07-13 16:35:32.985772] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.630 [2024-07-13 16:35:32.985877] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.630 [2024-07-13 16:35:32.985944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.630 [2024-07-13 16:35:32.986019] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:01.630 [2024-07-13 16:35:32.986096] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:01.630 [2024-07-13 16:35:32.986170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:01.630 [2024-07-13 16:35:32.986227] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.630 16:35:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.886 16:35:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.887 "name": "Existed_Raid", 00:18:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.887 "strip_size_kb": 64, 00:18:01.887 "state": "configuring", 00:18:01.887 "raid_level": "concat", 00:18:01.887 "superblock": false, 00:18:01.887 "num_base_bdevs": 4, 00:18:01.887 "num_base_bdevs_discovered": 0, 00:18:01.887 "num_base_bdevs_operational": 4, 00:18:01.887 "base_bdevs_list": [ 00:18:01.887 { 00:18:01.887 
"name": "BaseBdev1", 00:18:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.887 "is_configured": false, 00:18:01.887 "data_offset": 0, 00:18:01.887 "data_size": 0 00:18:01.887 }, 00:18:01.887 { 00:18:01.887 "name": "BaseBdev2", 00:18:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.887 "is_configured": false, 00:18:01.887 "data_offset": 0, 00:18:01.887 "data_size": 0 00:18:01.887 }, 00:18:01.887 { 00:18:01.887 "name": "BaseBdev3", 00:18:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.887 "is_configured": false, 00:18:01.887 "data_offset": 0, 00:18:01.887 "data_size": 0 00:18:01.887 }, 00:18:01.887 { 00:18:01.887 "name": "BaseBdev4", 00:18:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.887 "is_configured": false, 00:18:01.887 "data_offset": 0, 00:18:01.887 "data_size": 0 00:18:01.887 } 00:18:01.887 ] 00:18:01.887 }' 00:18:01.887 16:35:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.887 16:35:33 -- common/autotest_common.sh@10 -- # set +x 00:18:02.473 16:35:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:02.768 [2024-07-13 16:35:34.141541] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.768 [2024-07-13 16:35:34.141827] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:02.768 16:35:34 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:03.026 [2024-07-13 16:35:34.409677] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:03.026 [2024-07-13 16:35:34.410050] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:03.026 [2024-07-13 16:35:34.410149] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:03.026 [2024-07-13 16:35:34.410217] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:03.026 [2024-07-13 16:35:34.410249] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:03.026 [2024-07-13 16:35:34.410352] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:03.026 [2024-07-13 16:35:34.410388] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:03.026 [2024-07-13 16:35:34.410441] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:03.026 16:35:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:03.283 [2024-07-13 16:35:34.630806] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.283 BaseBdev1 00:18:03.283 16:35:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:03.283 16:35:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:03.283 16:35:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:03.283 16:35:34 -- common/autotest_common.sh@889 -- # local i 00:18:03.283 16:35:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:03.283 16:35:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:03.283 16:35:34 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.541 16:35:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:03.799 [ 00:18:03.799 { 00:18:03.799 "name": "BaseBdev1", 00:18:03.799 "aliases": [ 00:18:03.799 "d0c2ebc7-0441-404e-ad3b-85473ed9c22b" 00:18:03.799 ], 00:18:03.799 "product_name": "Malloc disk", 00:18:03.799 "block_size": 512, 00:18:03.799 "num_blocks": 65536, 00:18:03.799 "uuid": "d0c2ebc7-0441-404e-ad3b-85473ed9c22b", 00:18:03.799 "assigned_rate_limits": { 00:18:03.799 "rw_ios_per_sec": 0, 00:18:03.799 "rw_mbytes_per_sec": 0, 00:18:03.799 "r_mbytes_per_sec": 0, 00:18:03.799 "w_mbytes_per_sec": 0 00:18:03.799 }, 00:18:03.799 "claimed": true, 00:18:03.799 "claim_type": "exclusive_write", 00:18:03.799 "zoned": false, 00:18:03.800 "supported_io_types": { 00:18:03.800 "read": true, 00:18:03.800 "write": true, 00:18:03.800 "unmap": true, 00:18:03.800 "write_zeroes": true, 00:18:03.800 "flush": true, 00:18:03.800 "reset": true, 00:18:03.800 "compare": false, 00:18:03.800 "compare_and_write": false, 00:18:03.800 "abort": true, 00:18:03.800 "nvme_admin": false, 00:18:03.800 "nvme_io": false 00:18:03.800 }, 00:18:03.800 "memory_domains": [ 00:18:03.800 { 00:18:03.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.800 "dma_device_type": 2 00:18:03.800 } 00:18:03.800 ], 00:18:03.800 "driver_specific": {} 00:18:03.800 } 00:18:03.800 ] 00:18:03.800 16:35:35 -- common/autotest_common.sh@895 -- # return 0 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.800 16:35:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.059 16:35:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.059 "name": "Existed_Raid", 00:18:04.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.059 "strip_size_kb": 64, 00:18:04.059 "state": "configuring", 00:18:04.059 "raid_level": "concat", 00:18:04.059 "superblock": false, 00:18:04.059 "num_base_bdevs": 4, 00:18:04.059 "num_base_bdevs_discovered": 1, 00:18:04.059 "num_base_bdevs_operational": 4, 00:18:04.059 "base_bdevs_list": [ 00:18:04.059 { 00:18:04.059 "name": "BaseBdev1", 00:18:04.059 "uuid": "d0c2ebc7-0441-404e-ad3b-85473ed9c22b", 00:18:04.059 "is_configured": true, 00:18:04.059 "data_offset": 0, 00:18:04.059 "data_size": 65536 00:18:04.059 }, 00:18:04.059 { 00:18:04.059 "name": "BaseBdev2", 00:18:04.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.059 "is_configured": false, 00:18:04.059 "data_offset": 0, 00:18:04.059 "data_size": 0 00:18:04.059 }, 
00:18:04.059 { 00:18:04.059 "name": "BaseBdev3", 00:18:04.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.059 "is_configured": false, 00:18:04.059 "data_offset": 0, 00:18:04.059 "data_size": 0 00:18:04.059 }, 00:18:04.059 { 00:18:04.059 "name": "BaseBdev4", 00:18:04.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.059 "is_configured": false, 00:18:04.059 "data_offset": 0, 00:18:04.059 "data_size": 0 00:18:04.059 } 00:18:04.059 ] 00:18:04.059 }' 00:18:04.059 16:35:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.059 16:35:35 -- common/autotest_common.sh@10 -- # set +x 00:18:04.625 16:35:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:04.625 [2024-07-13 16:35:36.071106] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.625 [2024-07-13 16:35:36.071455] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:04.882 [2024-07-13 16:35:36.287310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.882 [2024-07-13 16:35:36.290239] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.882 [2024-07-13 16:35:36.290497] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.882 [2024-07-13 16:35:36.290638] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:04.882 [2024-07-13 16:35:36.290707] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:04.882 [2024-07-13 16:35:36.290797] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:04.882 [2024-07-13 16:35:36.290847] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.882 16:35:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.138 16:35:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.138 "name": "Existed_Raid", 00:18:05.138 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.138 "strip_size_kb": 64, 00:18:05.138 "state": "configuring", 00:18:05.138 "raid_level": "concat", 00:18:05.138 "superblock": false, 00:18:05.138 "num_base_bdevs": 4, 00:18:05.138 "num_base_bdevs_discovered": 1, 00:18:05.138 "num_base_bdevs_operational": 4, 00:18:05.138 "base_bdevs_list": [ 00:18:05.138 { 00:18:05.138 "name": "BaseBdev1", 00:18:05.138 "uuid": "d0c2ebc7-0441-404e-ad3b-85473ed9c22b", 00:18:05.138 "is_configured": true, 00:18:05.138 "data_offset": 0, 00:18:05.138 "data_size": 65536 00:18:05.138 }, 00:18:05.138 { 00:18:05.138 "name": "BaseBdev2", 00:18:05.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.138 "is_configured": false, 00:18:05.138 "data_offset": 0, 00:18:05.138 "data_size": 0 00:18:05.138 }, 00:18:05.138 { 00:18:05.138 "name": "BaseBdev3", 00:18:05.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.138 "is_configured": false, 00:18:05.138 "data_offset": 0, 00:18:05.138 "data_size": 0 00:18:05.138 }, 00:18:05.138 { 00:18:05.138 "name": "BaseBdev4", 00:18:05.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.138 "is_configured": false, 00:18:05.138 "data_offset": 0, 00:18:05.138 "data_size": 0 00:18:05.138 } 00:18:05.138 ] 00:18:05.138 }' 00:18:05.138 16:35:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.138 16:35:36 -- common/autotest_common.sh@10 -- # set +x 00:18:05.703 16:35:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:05.961 [2024-07-13 16:35:37.346256] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.961 BaseBdev2 00:18:05.961 16:35:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:05.961 16:35:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:05.961 16:35:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:05.961 16:35:37 -- common/autotest_common.sh@889 -- # local i 00:18:05.961 16:35:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:05.961 16:35:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:05.961 16:35:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.217 16:35:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:06.490 [ 00:18:06.490 { 00:18:06.490 "name": "BaseBdev2", 00:18:06.490 "aliases": [ 00:18:06.490 "edbb7fa3-1113-4a2e-ae66-4deba80fff74" 00:18:06.490 ], 00:18:06.490 "product_name": "Malloc disk", 00:18:06.490 "block_size": 512, 00:18:06.490 "num_blocks": 65536, 00:18:06.490 "uuid": "edbb7fa3-1113-4a2e-ae66-4deba80fff74", 00:18:06.490 "assigned_rate_limits": { 00:18:06.490 "rw_ios_per_sec": 0, 00:18:06.490 "rw_mbytes_per_sec": 0, 00:18:06.490 "r_mbytes_per_sec": 0, 00:18:06.490 "w_mbytes_per_sec": 0 00:18:06.490 }, 00:18:06.490 "claimed": true, 00:18:06.490 "claim_type": "exclusive_write", 00:18:06.490 "zoned": false, 00:18:06.490 "supported_io_types": { 00:18:06.490 "read": true, 00:18:06.490 "write": true, 00:18:06.490 "unmap": true, 00:18:06.490 "write_zeroes": true, 00:18:06.490 "flush": true, 00:18:06.490 "reset": true, 00:18:06.490 "compare": false, 00:18:06.490 "compare_and_write": false, 00:18:06.490 "abort": true, 00:18:06.490 "nvme_admin": false, 00:18:06.490 "nvme_io": false 00:18:06.490 }, 00:18:06.490 "memory_domains": [ 
00:18:06.490 { 00:18:06.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.490 "dma_device_type": 2 00:18:06.490 } 00:18:06.490 ], 00:18:06.490 "driver_specific": {} 00:18:06.490 } 00:18:06.490 ] 00:18:06.490 16:35:37 -- common/autotest_common.sh@895 -- # return 0 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.490 16:35:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.749 16:35:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.749 "name": "Existed_Raid", 00:18:06.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.749 "strip_size_kb": 64, 00:18:06.749 "state": "configuring", 00:18:06.749 "raid_level": "concat", 00:18:06.749 "superblock": false, 00:18:06.749 "num_base_bdevs": 4, 00:18:06.749 "num_base_bdevs_discovered": 2, 00:18:06.749 "num_base_bdevs_operational": 4, 00:18:06.749 "base_bdevs_list": [ 00:18:06.749 { 00:18:06.749 "name": "BaseBdev1", 00:18:06.749 "uuid": "d0c2ebc7-0441-404e-ad3b-85473ed9c22b", 00:18:06.749 "is_configured": true, 00:18:06.749 "data_offset": 0, 00:18:06.749 "data_size": 65536 00:18:06.749 }, 00:18:06.749 { 00:18:06.749 "name": "BaseBdev2", 00:18:06.749 "uuid": "edbb7fa3-1113-4a2e-ae66-4deba80fff74", 00:18:06.749 "is_configured": true, 00:18:06.749 "data_offset": 0, 00:18:06.749 "data_size": 65536 00:18:06.749 }, 00:18:06.749 { 00:18:06.749 "name": "BaseBdev3", 00:18:06.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.749 "is_configured": false, 00:18:06.749 "data_offset": 0, 00:18:06.749 "data_size": 0 00:18:06.749 }, 00:18:06.749 { 00:18:06.749 "name": "BaseBdev4", 00:18:06.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.749 "is_configured": false, 00:18:06.749 "data_offset": 0, 00:18:06.749 "data_size": 0 00:18:06.749 } 00:18:06.749 ] 00:18:06.749 }' 00:18:06.749 16:35:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.749 16:35:38 -- common/autotest_common.sh@10 -- # set +x 00:18:07.317 16:35:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:07.575 [2024-07-13 16:35:39.036539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:07.575 BaseBdev3 00:18:07.833 16:35:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:07.833 16:35:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:07.833 16:35:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:07.833 
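Each hot-add step in this phase follows the same recipe: create the malloc bdev, let examine run so the raid module can claim it, then poll for visibility with a 2000 ms timeout before asserting the discovered count. Condensed from the trace for BaseBdev3:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000

With each add, num_base_bdevs_discovered in the Existed_Raid dump climbs 1, 2, 3 while state stays "configuring"; only once BaseBdev4 is claimed does the concat array register its io device and report "online".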
16:35:39 -- common/autotest_common.sh@889 -- # local i 00:18:07.833 16:35:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:07.833 16:35:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:07.833 16:35:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.833 16:35:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:08.091 [ 00:18:08.091 { 00:18:08.091 "name": "BaseBdev3", 00:18:08.091 "aliases": [ 00:18:08.091 "0d19b1d5-0628-411a-878f-3affcfd4098c" 00:18:08.091 ], 00:18:08.091 "product_name": "Malloc disk", 00:18:08.091 "block_size": 512, 00:18:08.091 "num_blocks": 65536, 00:18:08.091 "uuid": "0d19b1d5-0628-411a-878f-3affcfd4098c", 00:18:08.091 "assigned_rate_limits": { 00:18:08.091 "rw_ios_per_sec": 0, 00:18:08.091 "rw_mbytes_per_sec": 0, 00:18:08.091 "r_mbytes_per_sec": 0, 00:18:08.091 "w_mbytes_per_sec": 0 00:18:08.091 }, 00:18:08.091 "claimed": true, 00:18:08.091 "claim_type": "exclusive_write", 00:18:08.091 "zoned": false, 00:18:08.091 "supported_io_types": { 00:18:08.091 "read": true, 00:18:08.091 "write": true, 00:18:08.091 "unmap": true, 00:18:08.091 "write_zeroes": true, 00:18:08.091 "flush": true, 00:18:08.091 "reset": true, 00:18:08.091 "compare": false, 00:18:08.091 "compare_and_write": false, 00:18:08.091 "abort": true, 00:18:08.091 "nvme_admin": false, 00:18:08.091 "nvme_io": false 00:18:08.091 }, 00:18:08.091 "memory_domains": [ 00:18:08.091 { 00:18:08.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.091 "dma_device_type": 2 00:18:08.091 } 00:18:08.091 ], 00:18:08.091 "driver_specific": {} 00:18:08.091 } 00:18:08.091 ] 00:18:08.091 16:35:39 -- common/autotest_common.sh@895 -- # return 0 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.091 16:35:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.348 16:35:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.348 "name": "Existed_Raid", 00:18:08.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.348 "strip_size_kb": 64, 00:18:08.348 "state": "configuring", 00:18:08.348 "raid_level": "concat", 00:18:08.348 "superblock": false, 00:18:08.348 "num_base_bdevs": 4, 00:18:08.348 "num_base_bdevs_discovered": 3, 00:18:08.348 "num_base_bdevs_operational": 4, 00:18:08.348 "base_bdevs_list": [ 00:18:08.348 { 00:18:08.348 "name": 
"BaseBdev1", 00:18:08.348 "uuid": "d0c2ebc7-0441-404e-ad3b-85473ed9c22b", 00:18:08.348 "is_configured": true, 00:18:08.348 "data_offset": 0, 00:18:08.348 "data_size": 65536 00:18:08.348 }, 00:18:08.348 { 00:18:08.348 "name": "BaseBdev2", 00:18:08.348 "uuid": "edbb7fa3-1113-4a2e-ae66-4deba80fff74", 00:18:08.348 "is_configured": true, 00:18:08.348 "data_offset": 0, 00:18:08.348 "data_size": 65536 00:18:08.348 }, 00:18:08.348 { 00:18:08.348 "name": "BaseBdev3", 00:18:08.349 "uuid": "0d19b1d5-0628-411a-878f-3affcfd4098c", 00:18:08.349 "is_configured": true, 00:18:08.349 "data_offset": 0, 00:18:08.349 "data_size": 65536 00:18:08.349 }, 00:18:08.349 { 00:18:08.349 "name": "BaseBdev4", 00:18:08.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.349 "is_configured": false, 00:18:08.349 "data_offset": 0, 00:18:08.349 "data_size": 0 00:18:08.349 } 00:18:08.349 ] 00:18:08.349 }' 00:18:08.349 16:35:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.349 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:18:08.913 16:35:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:09.170 [2024-07-13 16:35:40.570982] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:09.170 [2024-07-13 16:35:40.571371] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:09.170 [2024-07-13 16:35:40.571425] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:09.170 [2024-07-13 16:35:40.571729] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:09.170 [2024-07-13 16:35:40.572356] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:09.170 [2024-07-13 16:35:40.572512] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:09.170 [2024-07-13 16:35:40.572942] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.170 BaseBdev4 00:18:09.170 16:35:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:09.170 16:35:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:09.170 16:35:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:09.170 16:35:40 -- common/autotest_common.sh@889 -- # local i 00:18:09.170 16:35:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:09.170 16:35:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:09.170 16:35:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:09.427 16:35:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:09.685 [ 00:18:09.685 { 00:18:09.685 "name": "BaseBdev4", 00:18:09.685 "aliases": [ 00:18:09.685 "9a9c590a-a732-45bd-bc6f-fff9336e08da" 00:18:09.685 ], 00:18:09.685 "product_name": "Malloc disk", 00:18:09.685 "block_size": 512, 00:18:09.685 "num_blocks": 65536, 00:18:09.685 "uuid": "9a9c590a-a732-45bd-bc6f-fff9336e08da", 00:18:09.685 "assigned_rate_limits": { 00:18:09.685 "rw_ios_per_sec": 0, 00:18:09.685 "rw_mbytes_per_sec": 0, 00:18:09.685 "r_mbytes_per_sec": 0, 00:18:09.685 "w_mbytes_per_sec": 0 00:18:09.685 }, 00:18:09.685 "claimed": true, 00:18:09.685 "claim_type": "exclusive_write", 00:18:09.685 "zoned": false, 00:18:09.685 
"supported_io_types": { 00:18:09.685 "read": true, 00:18:09.685 "write": true, 00:18:09.685 "unmap": true, 00:18:09.685 "write_zeroes": true, 00:18:09.685 "flush": true, 00:18:09.685 "reset": true, 00:18:09.685 "compare": false, 00:18:09.685 "compare_and_write": false, 00:18:09.685 "abort": true, 00:18:09.685 "nvme_admin": false, 00:18:09.685 "nvme_io": false 00:18:09.685 }, 00:18:09.685 "memory_domains": [ 00:18:09.685 { 00:18:09.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.685 "dma_device_type": 2 00:18:09.685 } 00:18:09.685 ], 00:18:09.685 "driver_specific": {} 00:18:09.685 } 00:18:09.685 ] 00:18:09.685 16:35:40 -- common/autotest_common.sh@895 -- # return 0 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.685 16:35:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.943 16:35:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.943 "name": "Existed_Raid", 00:18:09.943 "uuid": "0e761143-7696-4f6a-903f-028d782c280d", 00:18:09.943 "strip_size_kb": 64, 00:18:09.943 "state": "online", 00:18:09.943 "raid_level": "concat", 00:18:09.943 "superblock": false, 00:18:09.943 "num_base_bdevs": 4, 00:18:09.943 "num_base_bdevs_discovered": 4, 00:18:09.943 "num_base_bdevs_operational": 4, 00:18:09.943 "base_bdevs_list": [ 00:18:09.943 { 00:18:09.943 "name": "BaseBdev1", 00:18:09.943 "uuid": "d0c2ebc7-0441-404e-ad3b-85473ed9c22b", 00:18:09.943 "is_configured": true, 00:18:09.943 "data_offset": 0, 00:18:09.943 "data_size": 65536 00:18:09.943 }, 00:18:09.943 { 00:18:09.943 "name": "BaseBdev2", 00:18:09.943 "uuid": "edbb7fa3-1113-4a2e-ae66-4deba80fff74", 00:18:09.943 "is_configured": true, 00:18:09.943 "data_offset": 0, 00:18:09.943 "data_size": 65536 00:18:09.943 }, 00:18:09.943 { 00:18:09.943 "name": "BaseBdev3", 00:18:09.943 "uuid": "0d19b1d5-0628-411a-878f-3affcfd4098c", 00:18:09.943 "is_configured": true, 00:18:09.943 "data_offset": 0, 00:18:09.943 "data_size": 65536 00:18:09.943 }, 00:18:09.943 { 00:18:09.943 "name": "BaseBdev4", 00:18:09.943 "uuid": "9a9c590a-a732-45bd-bc6f-fff9336e08da", 00:18:09.943 "is_configured": true, 00:18:09.943 "data_offset": 0, 00:18:09.943 "data_size": 65536 00:18:09.943 } 00:18:09.943 ] 00:18:09.943 }' 00:18:09.943 16:35:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.943 16:35:41 -- common/autotest_common.sh@10 -- # set +x 00:18:10.508 16:35:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:10.766 [2024-07-13 16:35:42.049053] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.766 [2024-07-13 16:35:42.049365] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.766 [2024-07-13 16:35:42.049608] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.766 16:35:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.023 16:35:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.023 "name": "Existed_Raid", 00:18:11.023 "uuid": "0e761143-7696-4f6a-903f-028d782c280d", 00:18:11.023 "strip_size_kb": 64, 00:18:11.023 "state": "offline", 00:18:11.023 "raid_level": "concat", 00:18:11.023 "superblock": false, 00:18:11.023 "num_base_bdevs": 4, 00:18:11.023 "num_base_bdevs_discovered": 3, 00:18:11.023 "num_base_bdevs_operational": 3, 00:18:11.023 "base_bdevs_list": [ 00:18:11.023 { 00:18:11.023 "name": null, 00:18:11.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.023 "is_configured": false, 00:18:11.023 "data_offset": 0, 00:18:11.023 "data_size": 65536 00:18:11.023 }, 00:18:11.023 { 00:18:11.023 "name": "BaseBdev2", 00:18:11.023 "uuid": "edbb7fa3-1113-4a2e-ae66-4deba80fff74", 00:18:11.023 "is_configured": true, 00:18:11.023 "data_offset": 0, 00:18:11.023 "data_size": 65536 00:18:11.023 }, 00:18:11.023 { 00:18:11.023 "name": "BaseBdev3", 00:18:11.023 "uuid": "0d19b1d5-0628-411a-878f-3affcfd4098c", 00:18:11.023 "is_configured": true, 00:18:11.023 "data_offset": 0, 00:18:11.023 "data_size": 65536 00:18:11.023 }, 00:18:11.023 { 00:18:11.023 "name": "BaseBdev4", 00:18:11.023 "uuid": "9a9c590a-a732-45bd-bc6f-fff9336e08da", 00:18:11.023 "is_configured": true, 00:18:11.023 "data_offset": 0, 00:18:11.023 "data_size": 65536 00:18:11.023 } 00:18:11.023 ] 00:18:11.023 }' 00:18:11.023 16:35:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.023 16:35:42 -- common/autotest_common.sh@10 -- # set +x 00:18:11.588 16:35:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:11.588 16:35:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:11.588 16:35:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:11.588 16:35:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:11.845 16:35:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:11.845 16:35:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:11.845 16:35:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:12.102 [2024-07-13 16:35:43.473166] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:12.102 16:35:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:12.102 16:35:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:12.102 16:35:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.102 16:35:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:12.358 16:35:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:12.358 16:35:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:12.358 16:35:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:12.615 [2024-07-13 16:35:43.963705] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:12.615 16:35:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:12.615 16:35:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:12.615 16:35:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.615 16:35:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:12.873 16:35:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:12.873 16:35:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:12.873 16:35:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:13.130 [2024-07-13 16:35:44.406233] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:13.130 [2024-07-13 16:35:44.406620] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:13.130 16:35:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:13.130 16:35:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:13.130 16:35:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.130 16:35:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:13.388 16:35:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:13.388 16:35:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:13.388 16:35:44 -- bdev/bdev_raid.sh@287 -- # killprocess 130218 00:18:13.388 16:35:44 -- common/autotest_common.sh@926 -- # '[' -z 130218 ']' 00:18:13.388 16:35:44 -- common/autotest_common.sh@930 -- # kill -0 130218 00:18:13.388 16:35:44 -- common/autotest_common.sh@931 -- # uname 00:18:13.388 16:35:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:13.388 16:35:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130218 00:18:13.388 16:35:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:13.388 16:35:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:13.388 16:35:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130218' 00:18:13.388 killing process with pid 130218 00:18:13.388 16:35:44 -- common/autotest_common.sh@945 
-- # kill 130218 00:18:13.388 16:35:44 -- common/autotest_common.sh@950 -- # wait 130218 00:18:13.388 [2024-07-13 16:35:44.709257] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.388 [2024-07-13 16:35:44.709387] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:13.955 00:18:13.955 real 0m13.397s 00:18:13.955 user 0m23.702s 00:18:13.955 sys 0m2.390s 00:18:13.955 16:35:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.955 16:35:45 -- common/autotest_common.sh@10 -- # set +x 00:18:13.955 ************************************ 00:18:13.955 END TEST raid_state_function_test 00:18:13.955 ************************************ 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:13.955 16:35:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:13.955 16:35:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:13.955 16:35:45 -- common/autotest_common.sh@10 -- # set +x 00:18:13.955 ************************************ 00:18:13.955 START TEST raid_state_function_test_sb 00:18:13.955 ************************************ 00:18:13.955 16:35:45 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=130645 
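The run above drives the whole concat state machine over JSON-RPC against the test app's UNIX socket. A minimal sketch of the same flow by hand, using only the RPCs visible in the trace (the bdev names and socket path follow the trace; it assumes a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock and that jq is installed):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Four 32 MiB malloc bdevs (65536 blocks of 512 bytes, matching the
# bdev_get_bdevs output above).
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# Assemble a concat array with a 64 KiB strip. The raid sits in
# "configuring" until all four base bdevs are claimed, then goes "online".
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# The same state check verify_raid_bdev_state performs in the trace.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

# concat carries no redundancy, so deleting any one base bdev must drop the
# array from "online" to "offline", which is what the test asserts.
$RPC bdev_malloc_delete BaseBdev1
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'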
00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130645' 00:18:13.955 Process raid pid: 130645 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:13.955 16:35:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130645 /var/tmp/spdk-raid.sock 00:18:13.955 16:35:45 -- common/autotest_common.sh@819 -- # '[' -z 130645 ']' 00:18:13.955 16:35:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:13.955 16:35:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:13.955 16:35:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:13.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:13.956 16:35:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:13.956 16:35:45 -- common/autotest_common.sh@10 -- # set +x 00:18:13.956 [2024-07-13 16:35:45.290181] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:13.956 [2024-07-13 16:35:45.290690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.214 [2024-07-13 16:35:45.439808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.214 [2024-07-13 16:35:45.536643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.214 [2024-07-13 16:35:45.618274] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.151 16:35:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:15.151 16:35:46 -- common/autotest_common.sh@852 -- # return 0 00:18:15.151 16:35:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:15.151 [2024-07-13 16:35:46.549832] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.151 [2024-07-13 16:35:46.550261] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.151 [2024-07-13 16:35:46.550417] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.151 [2024-07-13 16:35:46.550484] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.151 [2024-07-13 16:35:46.550583] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.151 [2024-07-13 16:35:46.550687] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.151 [2024-07-13 16:35:46.550724] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:15.151 [2024-07-13 16:35:46.550845] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:15.151 16:35:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:15.151 16:35:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.151 16:35:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:15.151 16:35:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:15.151 16:35:46 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.151 16:35:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:15.151 16:35:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.152 16:35:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.152 16:35:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.152 16:35:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.152 16:35:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.152 16:35:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.409 16:35:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.410 "name": "Existed_Raid", 00:18:15.410 "uuid": "496042f1-4927-4901-b162-8b3c1a8c448b", 00:18:15.410 "strip_size_kb": 64, 00:18:15.410 "state": "configuring", 00:18:15.410 "raid_level": "concat", 00:18:15.410 "superblock": true, 00:18:15.410 "num_base_bdevs": 4, 00:18:15.410 "num_base_bdevs_discovered": 0, 00:18:15.410 "num_base_bdevs_operational": 4, 00:18:15.410 "base_bdevs_list": [ 00:18:15.410 { 00:18:15.410 "name": "BaseBdev1", 00:18:15.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.410 "is_configured": false, 00:18:15.410 "data_offset": 0, 00:18:15.410 "data_size": 0 00:18:15.410 }, 00:18:15.410 { 00:18:15.410 "name": "BaseBdev2", 00:18:15.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.410 "is_configured": false, 00:18:15.410 "data_offset": 0, 00:18:15.410 "data_size": 0 00:18:15.410 }, 00:18:15.410 { 00:18:15.410 "name": "BaseBdev3", 00:18:15.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.410 "is_configured": false, 00:18:15.410 "data_offset": 0, 00:18:15.410 "data_size": 0 00:18:15.410 }, 00:18:15.410 { 00:18:15.410 "name": "BaseBdev4", 00:18:15.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.410 "is_configured": false, 00:18:15.410 "data_offset": 0, 00:18:15.410 "data_size": 0 00:18:15.410 } 00:18:15.410 ] 00:18:15.410 }' 00:18:15.410 16:35:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.410 16:35:46 -- common/autotest_common.sh@10 -- # set +x 00:18:15.978 16:35:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:16.238 [2024-07-13 16:35:47.597844] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.238 [2024-07-13 16:35:47.598245] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:16.238 16:35:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:16.497 [2024-07-13 16:35:47.813963] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.497 [2024-07-13 16:35:47.814340] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.497 [2024-07-13 16:35:47.814427] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.497 [2024-07-13 16:35:47.814493] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.497 [2024-07-13 16:35:47.814523] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:16.497 [2024-07-13 16:35:47.814564] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:16.497 [2024-07-13 16:35:47.814653] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:16.497 [2024-07-13 16:35:47.814713] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:16.497 16:35:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:16.757 [2024-07-13 16:35:48.046771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.757 BaseBdev1 00:18:16.757 16:35:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:16.757 16:35:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:16.757 16:35:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:16.757 16:35:48 -- common/autotest_common.sh@889 -- # local i 00:18:16.757 16:35:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:16.757 16:35:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:16.757 16:35:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:17.016 16:35:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:17.275 [ 00:18:17.275 { 00:18:17.275 "name": "BaseBdev1", 00:18:17.275 "aliases": [ 00:18:17.275 "be7c234b-e1ed-4543-9651-9d114a4c879b" 00:18:17.275 ], 00:18:17.275 "product_name": "Malloc disk", 00:18:17.275 "block_size": 512, 00:18:17.275 "num_blocks": 65536, 00:18:17.275 "uuid": "be7c234b-e1ed-4543-9651-9d114a4c879b", 00:18:17.275 "assigned_rate_limits": { 00:18:17.275 "rw_ios_per_sec": 0, 00:18:17.275 "rw_mbytes_per_sec": 0, 00:18:17.275 "r_mbytes_per_sec": 0, 00:18:17.275 "w_mbytes_per_sec": 0 00:18:17.275 }, 00:18:17.275 "claimed": true, 00:18:17.275 "claim_type": "exclusive_write", 00:18:17.275 "zoned": false, 00:18:17.275 "supported_io_types": { 00:18:17.275 "read": true, 00:18:17.275 "write": true, 00:18:17.275 "unmap": true, 00:18:17.275 "write_zeroes": true, 00:18:17.275 "flush": true, 00:18:17.275 "reset": true, 00:18:17.275 "compare": false, 00:18:17.275 "compare_and_write": false, 00:18:17.275 "abort": true, 00:18:17.275 "nvme_admin": false, 00:18:17.275 "nvme_io": false 00:18:17.275 }, 00:18:17.275 "memory_domains": [ 00:18:17.275 { 00:18:17.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.275 "dma_device_type": 2 00:18:17.275 } 00:18:17.275 ], 00:18:17.275 "driver_specific": {} 00:18:17.275 } 00:18:17.275 ] 00:18:17.275 16:35:48 -- common/autotest_common.sh@895 -- # return 0 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.275 16:35:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.534 16:35:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.534 "name": "Existed_Raid", 00:18:17.534 "uuid": "bd49aea1-f231-4847-899c-6f9fa8c0b2c1", 00:18:17.534 "strip_size_kb": 64, 00:18:17.534 "state": "configuring", 00:18:17.534 "raid_level": "concat", 00:18:17.534 "superblock": true, 00:18:17.534 "num_base_bdevs": 4, 00:18:17.534 "num_base_bdevs_discovered": 1, 00:18:17.534 "num_base_bdevs_operational": 4, 00:18:17.534 "base_bdevs_list": [ 00:18:17.534 { 00:18:17.534 "name": "BaseBdev1", 00:18:17.534 "uuid": "be7c234b-e1ed-4543-9651-9d114a4c879b", 00:18:17.534 "is_configured": true, 00:18:17.534 "data_offset": 2048, 00:18:17.534 "data_size": 63488 00:18:17.534 }, 00:18:17.534 { 00:18:17.534 "name": "BaseBdev2", 00:18:17.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.534 "is_configured": false, 00:18:17.534 "data_offset": 0, 00:18:17.534 "data_size": 0 00:18:17.534 }, 00:18:17.534 { 00:18:17.534 "name": "BaseBdev3", 00:18:17.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.534 "is_configured": false, 00:18:17.534 "data_offset": 0, 00:18:17.534 "data_size": 0 00:18:17.534 }, 00:18:17.534 { 00:18:17.534 "name": "BaseBdev4", 00:18:17.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.535 "is_configured": false, 00:18:17.535 "data_offset": 0, 00:18:17.535 "data_size": 0 00:18:17.535 } 00:18:17.535 ] 00:18:17.535 }' 00:18:17.535 16:35:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.535 16:35:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.103 16:35:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:18.104 [2024-07-13 16:35:49.547095] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.104 [2024-07-13 16:35:49.547473] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:18.104 16:35:49 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:18.104 16:35:49 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:18.363 16:35:49 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:18.622 BaseBdev1 00:18:18.622 16:35:50 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:18.622 16:35:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:18.622 16:35:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:18.622 16:35:50 -- common/autotest_common.sh@889 -- # local i 00:18:18.622 16:35:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:18.622 16:35:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:18.622 16:35:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.881 16:35:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.140 [ 00:18:19.140 { 00:18:19.140 "name": "BaseBdev1", 00:18:19.140 "aliases": [ 00:18:19.140 "c7b2358a-d346-4478-815f-3f8660770b2b" 00:18:19.140 ], 
00:18:19.140 "product_name": "Malloc disk", 00:18:19.140 "block_size": 512, 00:18:19.140 "num_blocks": 65536, 00:18:19.140 "uuid": "c7b2358a-d346-4478-815f-3f8660770b2b", 00:18:19.140 "assigned_rate_limits": { 00:18:19.140 "rw_ios_per_sec": 0, 00:18:19.140 "rw_mbytes_per_sec": 0, 00:18:19.140 "r_mbytes_per_sec": 0, 00:18:19.140 "w_mbytes_per_sec": 0 00:18:19.140 }, 00:18:19.140 "claimed": false, 00:18:19.140 "zoned": false, 00:18:19.140 "supported_io_types": { 00:18:19.140 "read": true, 00:18:19.140 "write": true, 00:18:19.140 "unmap": true, 00:18:19.140 "write_zeroes": true, 00:18:19.140 "flush": true, 00:18:19.140 "reset": true, 00:18:19.140 "compare": false, 00:18:19.140 "compare_and_write": false, 00:18:19.140 "abort": true, 00:18:19.140 "nvme_admin": false, 00:18:19.140 "nvme_io": false 00:18:19.140 }, 00:18:19.140 "memory_domains": [ 00:18:19.140 { 00:18:19.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.140 "dma_device_type": 2 00:18:19.140 } 00:18:19.140 ], 00:18:19.140 "driver_specific": {} 00:18:19.140 } 00:18:19.140 ] 00:18:19.140 16:35:50 -- common/autotest_common.sh@895 -- # return 0 00:18:19.140 16:35:50 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:19.399 [2024-07-13 16:35:50.690144] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.399 [2024-07-13 16:35:50.693066] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:19.399 [2024-07-13 16:35:50.693332] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.399 [2024-07-13 16:35:50.693452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:19.399 [2024-07-13 16:35:50.693517] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:19.399 [2024-07-13 16:35:50.693611] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:19.399 [2024-07-13 16:35:50.693669] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.399 16:35:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.658 16:35:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.658 "name": "Existed_Raid", 
00:18:19.658 "uuid": "4efc46f6-2d21-45da-8b07-6a788d6487b4", 00:18:19.658 "strip_size_kb": 64, 00:18:19.658 "state": "configuring", 00:18:19.658 "raid_level": "concat", 00:18:19.658 "superblock": true, 00:18:19.658 "num_base_bdevs": 4, 00:18:19.658 "num_base_bdevs_discovered": 1, 00:18:19.658 "num_base_bdevs_operational": 4, 00:18:19.658 "base_bdevs_list": [ 00:18:19.658 { 00:18:19.658 "name": "BaseBdev1", 00:18:19.658 "uuid": "c7b2358a-d346-4478-815f-3f8660770b2b", 00:18:19.658 "is_configured": true, 00:18:19.658 "data_offset": 2048, 00:18:19.658 "data_size": 63488 00:18:19.658 }, 00:18:19.658 { 00:18:19.658 "name": "BaseBdev2", 00:18:19.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.658 "is_configured": false, 00:18:19.658 "data_offset": 0, 00:18:19.658 "data_size": 0 00:18:19.658 }, 00:18:19.658 { 00:18:19.658 "name": "BaseBdev3", 00:18:19.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.658 "is_configured": false, 00:18:19.658 "data_offset": 0, 00:18:19.658 "data_size": 0 00:18:19.658 }, 00:18:19.658 { 00:18:19.658 "name": "BaseBdev4", 00:18:19.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.658 "is_configured": false, 00:18:19.658 "data_offset": 0, 00:18:19.658 "data_size": 0 00:18:19.658 } 00:18:19.658 ] 00:18:19.658 }' 00:18:19.658 16:35:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.658 16:35:50 -- common/autotest_common.sh@10 -- # set +x 00:18:20.322 16:35:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:20.603 [2024-07-13 16:35:51.834286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.603 BaseBdev2 00:18:20.603 16:35:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:20.603 16:35:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:20.603 16:35:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:20.603 16:35:51 -- common/autotest_common.sh@889 -- # local i 00:18:20.603 16:35:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:20.603 16:35:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:20.603 16:35:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:20.862 16:35:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:20.862 [ 00:18:20.862 { 00:18:20.862 "name": "BaseBdev2", 00:18:20.862 "aliases": [ 00:18:20.862 "7515322b-e76e-4170-aaf8-cae612fac7b8" 00:18:20.862 ], 00:18:20.862 "product_name": "Malloc disk", 00:18:20.862 "block_size": 512, 00:18:20.862 "num_blocks": 65536, 00:18:20.862 "uuid": "7515322b-e76e-4170-aaf8-cae612fac7b8", 00:18:20.862 "assigned_rate_limits": { 00:18:20.862 "rw_ios_per_sec": 0, 00:18:20.862 "rw_mbytes_per_sec": 0, 00:18:20.862 "r_mbytes_per_sec": 0, 00:18:20.862 "w_mbytes_per_sec": 0 00:18:20.862 }, 00:18:20.862 "claimed": true, 00:18:20.862 "claim_type": "exclusive_write", 00:18:20.862 "zoned": false, 00:18:20.862 "supported_io_types": { 00:18:20.862 "read": true, 00:18:20.862 "write": true, 00:18:20.862 "unmap": true, 00:18:20.862 "write_zeroes": true, 00:18:20.862 "flush": true, 00:18:20.862 "reset": true, 00:18:20.862 "compare": false, 00:18:20.862 "compare_and_write": false, 00:18:20.862 "abort": true, 00:18:20.862 "nvme_admin": false, 00:18:20.862 "nvme_io": false 00:18:20.862 }, 00:18:20.862 
"memory_domains": [ 00:18:20.862 { 00:18:20.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.862 "dma_device_type": 2 00:18:20.862 } 00:18:20.862 ], 00:18:20.862 "driver_specific": {} 00:18:20.862 } 00:18:20.862 ] 00:18:20.862 16:35:52 -- common/autotest_common.sh@895 -- # return 0 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.862 16:35:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.121 16:35:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.121 "name": "Existed_Raid", 00:18:21.121 "uuid": "4efc46f6-2d21-45da-8b07-6a788d6487b4", 00:18:21.121 "strip_size_kb": 64, 00:18:21.121 "state": "configuring", 00:18:21.121 "raid_level": "concat", 00:18:21.121 "superblock": true, 00:18:21.121 "num_base_bdevs": 4, 00:18:21.121 "num_base_bdevs_discovered": 2, 00:18:21.121 "num_base_bdevs_operational": 4, 00:18:21.121 "base_bdevs_list": [ 00:18:21.121 { 00:18:21.121 "name": "BaseBdev1", 00:18:21.121 "uuid": "c7b2358a-d346-4478-815f-3f8660770b2b", 00:18:21.121 "is_configured": true, 00:18:21.121 "data_offset": 2048, 00:18:21.121 "data_size": 63488 00:18:21.121 }, 00:18:21.121 { 00:18:21.121 "name": "BaseBdev2", 00:18:21.121 "uuid": "7515322b-e76e-4170-aaf8-cae612fac7b8", 00:18:21.121 "is_configured": true, 00:18:21.121 "data_offset": 2048, 00:18:21.121 "data_size": 63488 00:18:21.121 }, 00:18:21.121 { 00:18:21.121 "name": "BaseBdev3", 00:18:21.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.121 "is_configured": false, 00:18:21.121 "data_offset": 0, 00:18:21.121 "data_size": 0 00:18:21.121 }, 00:18:21.121 { 00:18:21.121 "name": "BaseBdev4", 00:18:21.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.121 "is_configured": false, 00:18:21.121 "data_offset": 0, 00:18:21.121 "data_size": 0 00:18:21.121 } 00:18:21.121 ] 00:18:21.121 }' 00:18:21.121 16:35:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.121 16:35:52 -- common/autotest_common.sh@10 -- # set +x 00:18:22.053 16:35:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:22.053 [2024-07-13 16:35:53.492531] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.053 BaseBdev3 00:18:22.053 16:35:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:22.053 16:35:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:22.053 16:35:53 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:18:22.053 16:35:53 -- common/autotest_common.sh@889 -- # local i 00:18:22.053 16:35:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:22.053 16:35:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:22.053 16:35:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.618 16:35:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:22.618 [ 00:18:22.618 { 00:18:22.618 "name": "BaseBdev3", 00:18:22.618 "aliases": [ 00:18:22.618 "c832d0fd-1d2a-4104-b323-1e9d15c9b43d" 00:18:22.618 ], 00:18:22.618 "product_name": "Malloc disk", 00:18:22.618 "block_size": 512, 00:18:22.618 "num_blocks": 65536, 00:18:22.618 "uuid": "c832d0fd-1d2a-4104-b323-1e9d15c9b43d", 00:18:22.618 "assigned_rate_limits": { 00:18:22.618 "rw_ios_per_sec": 0, 00:18:22.618 "rw_mbytes_per_sec": 0, 00:18:22.618 "r_mbytes_per_sec": 0, 00:18:22.618 "w_mbytes_per_sec": 0 00:18:22.618 }, 00:18:22.618 "claimed": true, 00:18:22.618 "claim_type": "exclusive_write", 00:18:22.618 "zoned": false, 00:18:22.618 "supported_io_types": { 00:18:22.618 "read": true, 00:18:22.618 "write": true, 00:18:22.618 "unmap": true, 00:18:22.618 "write_zeroes": true, 00:18:22.618 "flush": true, 00:18:22.618 "reset": true, 00:18:22.618 "compare": false, 00:18:22.618 "compare_and_write": false, 00:18:22.618 "abort": true, 00:18:22.618 "nvme_admin": false, 00:18:22.618 "nvme_io": false 00:18:22.618 }, 00:18:22.618 "memory_domains": [ 00:18:22.618 { 00:18:22.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.618 "dma_device_type": 2 00:18:22.618 } 00:18:22.618 ], 00:18:22.618 "driver_specific": {} 00:18:22.618 } 00:18:22.618 ] 00:18:22.618 16:35:54 -- common/autotest_common.sh@895 -- # return 0 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.618 16:35:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.619 16:35:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.619 16:35:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.877 16:35:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.877 "name": "Existed_Raid", 00:18:22.877 "uuid": "4efc46f6-2d21-45da-8b07-6a788d6487b4", 00:18:22.877 "strip_size_kb": 64, 00:18:22.877 "state": "configuring", 00:18:22.877 "raid_level": "concat", 00:18:22.877 "superblock": true, 00:18:22.877 "num_base_bdevs": 4, 00:18:22.877 "num_base_bdevs_discovered": 3, 00:18:22.877 "num_base_bdevs_operational": 4, 00:18:22.877 "base_bdevs_list": [ 00:18:22.877 { 
00:18:22.877 "name": "BaseBdev1", 00:18:22.877 "uuid": "c7b2358a-d346-4478-815f-3f8660770b2b", 00:18:22.877 "is_configured": true, 00:18:22.877 "data_offset": 2048, 00:18:22.877 "data_size": 63488 00:18:22.877 }, 00:18:22.877 { 00:18:22.877 "name": "BaseBdev2", 00:18:22.877 "uuid": "7515322b-e76e-4170-aaf8-cae612fac7b8", 00:18:22.877 "is_configured": true, 00:18:22.877 "data_offset": 2048, 00:18:22.877 "data_size": 63488 00:18:22.877 }, 00:18:22.877 { 00:18:22.877 "name": "BaseBdev3", 00:18:22.877 "uuid": "c832d0fd-1d2a-4104-b323-1e9d15c9b43d", 00:18:22.877 "is_configured": true, 00:18:22.877 "data_offset": 2048, 00:18:22.877 "data_size": 63488 00:18:22.877 }, 00:18:22.877 { 00:18:22.877 "name": "BaseBdev4", 00:18:22.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.877 "is_configured": false, 00:18:22.877 "data_offset": 0, 00:18:22.877 "data_size": 0 00:18:22.877 } 00:18:22.877 ] 00:18:22.877 }' 00:18:22.877 16:35:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.877 16:35:54 -- common/autotest_common.sh@10 -- # set +x 00:18:23.444 16:35:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:23.704 [2024-07-13 16:35:55.038991] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:23.704 [2024-07-13 16:35:55.039600] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:18:23.704 [2024-07-13 16:35:55.039723] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:23.704 [2024-07-13 16:35:55.039920] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:18:23.704 [2024-07-13 16:35:55.040499] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:18:23.704 [2024-07-13 16:35:55.040618] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:18:23.704 [2024-07-13 16:35:55.040922] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.704 BaseBdev4 00:18:23.704 16:35:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:23.704 16:35:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:23.704 16:35:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:23.704 16:35:55 -- common/autotest_common.sh@889 -- # local i 00:18:23.704 16:35:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:23.704 16:35:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:23.704 16:35:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:23.963 16:35:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:24.221 [ 00:18:24.221 { 00:18:24.221 "name": "BaseBdev4", 00:18:24.221 "aliases": [ 00:18:24.221 "3a08b4a0-2a8a-4f9b-b9e0-13598824f1f8" 00:18:24.221 ], 00:18:24.221 "product_name": "Malloc disk", 00:18:24.221 "block_size": 512, 00:18:24.221 "num_blocks": 65536, 00:18:24.221 "uuid": "3a08b4a0-2a8a-4f9b-b9e0-13598824f1f8", 00:18:24.221 "assigned_rate_limits": { 00:18:24.221 "rw_ios_per_sec": 0, 00:18:24.221 "rw_mbytes_per_sec": 0, 00:18:24.221 "r_mbytes_per_sec": 0, 00:18:24.221 "w_mbytes_per_sec": 0 00:18:24.221 }, 00:18:24.221 "claimed": true, 00:18:24.221 "claim_type": "exclusive_write", 00:18:24.221 "zoned": false, 
00:18:24.221 "supported_io_types": { 00:18:24.221 "read": true, 00:18:24.221 "write": true, 00:18:24.221 "unmap": true, 00:18:24.221 "write_zeroes": true, 00:18:24.221 "flush": true, 00:18:24.221 "reset": true, 00:18:24.221 "compare": false, 00:18:24.221 "compare_and_write": false, 00:18:24.221 "abort": true, 00:18:24.221 "nvme_admin": false, 00:18:24.221 "nvme_io": false 00:18:24.221 }, 00:18:24.221 "memory_domains": [ 00:18:24.221 { 00:18:24.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.221 "dma_device_type": 2 00:18:24.221 } 00:18:24.221 ], 00:18:24.221 "driver_specific": {} 00:18:24.221 } 00:18:24.221 ] 00:18:24.221 16:35:55 -- common/autotest_common.sh@895 -- # return 0 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.221 16:35:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.478 16:35:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.478 "name": "Existed_Raid", 00:18:24.478 "uuid": "4efc46f6-2d21-45da-8b07-6a788d6487b4", 00:18:24.478 "strip_size_kb": 64, 00:18:24.478 "state": "online", 00:18:24.478 "raid_level": "concat", 00:18:24.478 "superblock": true, 00:18:24.478 "num_base_bdevs": 4, 00:18:24.478 "num_base_bdevs_discovered": 4, 00:18:24.478 "num_base_bdevs_operational": 4, 00:18:24.478 "base_bdevs_list": [ 00:18:24.478 { 00:18:24.478 "name": "BaseBdev1", 00:18:24.478 "uuid": "c7b2358a-d346-4478-815f-3f8660770b2b", 00:18:24.478 "is_configured": true, 00:18:24.478 "data_offset": 2048, 00:18:24.478 "data_size": 63488 00:18:24.478 }, 00:18:24.478 { 00:18:24.478 "name": "BaseBdev2", 00:18:24.478 "uuid": "7515322b-e76e-4170-aaf8-cae612fac7b8", 00:18:24.478 "is_configured": true, 00:18:24.478 "data_offset": 2048, 00:18:24.478 "data_size": 63488 00:18:24.478 }, 00:18:24.478 { 00:18:24.478 "name": "BaseBdev3", 00:18:24.478 "uuid": "c832d0fd-1d2a-4104-b323-1e9d15c9b43d", 00:18:24.478 "is_configured": true, 00:18:24.478 "data_offset": 2048, 00:18:24.478 "data_size": 63488 00:18:24.478 }, 00:18:24.478 { 00:18:24.478 "name": "BaseBdev4", 00:18:24.478 "uuid": "3a08b4a0-2a8a-4f9b-b9e0-13598824f1f8", 00:18:24.478 "is_configured": true, 00:18:24.478 "data_offset": 2048, 00:18:24.478 "data_size": 63488 00:18:24.478 } 00:18:24.478 ] 00:18:24.478 }' 00:18:24.478 16:35:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.478 16:35:55 -- common/autotest_common.sh@10 -- # set +x 00:18:25.044 16:35:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:18:25.303 [2024-07-13 16:35:56.643741] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.303 [2024-07-13 16:35:56.644079] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.303 [2024-07-13 16:35:56.644287] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.303 16:35:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.610 16:35:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.610 "name": "Existed_Raid", 00:18:25.610 "uuid": "4efc46f6-2d21-45da-8b07-6a788d6487b4", 00:18:25.610 "strip_size_kb": 64, 00:18:25.610 "state": "offline", 00:18:25.610 "raid_level": "concat", 00:18:25.610 "superblock": true, 00:18:25.610 "num_base_bdevs": 4, 00:18:25.610 "num_base_bdevs_discovered": 3, 00:18:25.610 "num_base_bdevs_operational": 3, 00:18:25.610 "base_bdevs_list": [ 00:18:25.610 { 00:18:25.610 "name": null, 00:18:25.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.610 "is_configured": false, 00:18:25.610 "data_offset": 2048, 00:18:25.610 "data_size": 63488 00:18:25.610 }, 00:18:25.610 { 00:18:25.610 "name": "BaseBdev2", 00:18:25.610 "uuid": "7515322b-e76e-4170-aaf8-cae612fac7b8", 00:18:25.610 "is_configured": true, 00:18:25.610 "data_offset": 2048, 00:18:25.610 "data_size": 63488 00:18:25.610 }, 00:18:25.610 { 00:18:25.610 "name": "BaseBdev3", 00:18:25.610 "uuid": "c832d0fd-1d2a-4104-b323-1e9d15c9b43d", 00:18:25.610 "is_configured": true, 00:18:25.610 "data_offset": 2048, 00:18:25.610 "data_size": 63488 00:18:25.610 }, 00:18:25.610 { 00:18:25.610 "name": "BaseBdev4", 00:18:25.610 "uuid": "3a08b4a0-2a8a-4f9b-b9e0-13598824f1f8", 00:18:25.610 "is_configured": true, 00:18:25.610 "data_offset": 2048, 00:18:25.610 "data_size": 63488 00:18:25.610 } 00:18:25.610 ] 00:18:25.610 }' 00:18:25.610 16:35:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.610 16:35:56 -- common/autotest_common.sh@10 -- # set +x 00:18:26.175 16:35:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:26.175 16:35:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:26.175 16:35:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:26.175 16:35:57 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.432 16:35:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:26.432 16:35:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:26.432 16:35:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:26.690 [2024-07-13 16:35:58.117156] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:26.948 16:35:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:26.948 16:35:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:26.948 16:35:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.948 16:35:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:27.206 16:35:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:27.206 16:35:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.206 16:35:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:27.206 [2024-07-13 16:35:58.663990] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:27.464 16:35:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:27.464 16:35:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:27.464 16:35:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.464 16:35:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:27.464 16:35:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:27.464 16:35:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.464 16:35:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:27.722 [2024-07-13 16:35:59.118690] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:27.722 [2024-07-13 16:35:59.119057] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:18:27.722 16:35:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:27.722 16:35:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:27.722 16:35:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.722 16:35:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:27.980 16:35:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:27.980 16:35:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:27.980 16:35:59 -- bdev/bdev_raid.sh@287 -- # killprocess 130645 00:18:27.980 16:35:59 -- common/autotest_common.sh@926 -- # '[' -z 130645 ']' 00:18:27.980 16:35:59 -- common/autotest_common.sh@930 -- # kill -0 130645 00:18:27.980 16:35:59 -- common/autotest_common.sh@931 -- # uname 00:18:27.980 16:35:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:27.980 16:35:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130645 00:18:27.980 16:35:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:27.980 killing process with pid 130645 00:18:27.980 16:35:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:27.980 16:35:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130645' 
00:18:27.980 16:35:59 -- common/autotest_common.sh@945 -- # kill 130645 00:18:27.980 [2024-07-13 16:35:59.409217] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.980 16:35:59 -- common/autotest_common.sh@950 -- # wait 130645 00:18:27.980 [2024-07-13 16:35:59.409344] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:28.543 ************************************ 00:18:28.543 END TEST raid_state_function_test_sb 00:18:28.543 ************************************ 00:18:28.543 00:18:28.543 real 0m14.608s 00:18:28.543 user 0m25.853s 00:18:28.543 sys 0m2.653s 00:18:28.543 16:35:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:28.543 16:35:59 -- common/autotest_common.sh@10 -- # set +x 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:28.543 16:35:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:28.543 16:35:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:28.543 16:35:59 -- common/autotest_common.sh@10 -- # set +x 00:18:28.543 ************************************ 00:18:28.543 START TEST raid_superblock_test 00:18:28.543 ************************************ 00:18:28.543 16:35:59 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=131092 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131092 /var/tmp/spdk-raid.sock 00:18:28.543 16:35:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:28.543 16:35:59 -- common/autotest_common.sh@819 -- # '[' -z 131092 ']' 00:18:28.543 16:35:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:28.543 16:35:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:28.543 16:35:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:28.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
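Where the state-function tests build the array straight on malloc bdevs, raid_superblock_test wraps each malloc bdev in a passthru bdev created with a fixed UUID (pt1 over malloc1 with UUID ...0001, and so on), pinning each leg to a known identity; with fixed UUIDs the superblock contents are deterministic across runs. A minimal sketch of that per-leg setup, using the exact RPCs from the trace (two legs shown for brevity; the test repeats this once per base bdev):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# One leg = a 32 MiB malloc bdev plus a passthru bdev pinned to a known UUID.
for i in 1 2; do
  $RPC bdev_malloc_create 32 512 -b "malloc$i"
  $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
      -u "00000000-0000-0000-0000-00000000000$i"
done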
00:18:28.543 16:35:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:28.543 16:35:59 -- common/autotest_common.sh@10 -- # set +x 00:18:28.543 [2024-07-13 16:35:59.973911] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:28.543 [2024-07-13 16:35:59.974222] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131092 ] 00:18:28.801 [2024-07-13 16:36:00.125456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.801 [2024-07-13 16:36:00.219703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.059 [2024-07-13 16:36:00.302442] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.625 16:36:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:29.625 16:36:00 -- common/autotest_common.sh@852 -- # return 0 00:18:29.625 16:36:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:29.625 16:36:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:29.625 16:36:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:29.625 16:36:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:29.625 16:36:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:29.625 16:36:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:29.625 16:36:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:29.625 16:36:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:29.625 16:36:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:29.883 malloc1 00:18:29.883 16:36:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:30.140 [2024-07-13 16:36:01.466615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.140 [2024-07-13 16:36:01.466785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.140 [2024-07-13 16:36:01.466837] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:18:30.140 [2024-07-13 16:36:01.466900] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.140 [2024-07-13 16:36:01.470108] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.140 [2024-07-13 16:36:01.470211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.140 pt1 00:18:30.141 16:36:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:30.141 16:36:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:30.141 16:36:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:30.141 16:36:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:30.141 16:36:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:30.141 16:36:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.141 16:36:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.141 16:36:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.141 16:36:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:30.399 malloc2 00:18:30.399 16:36:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.656 [2024-07-13 16:36:01.983415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.656 [2024-07-13 16:36:01.983545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.656 [2024-07-13 16:36:01.983610] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:30.656 [2024-07-13 16:36:01.983665] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.656 [2024-07-13 16:36:01.986587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.656 [2024-07-13 16:36:01.986672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.656 pt2 00:18:30.656 16:36:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:30.656 16:36:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:30.656 16:36:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:30.656 16:36:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:30.657 16:36:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:30.657 16:36:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.657 16:36:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.657 16:36:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.657 16:36:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:30.931 malloc3 00:18:30.931 16:36:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:31.227 [2024-07-13 16:36:02.517139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:31.227 [2024-07-13 16:36:02.517278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.227 [2024-07-13 16:36:02.517334] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:31.227 [2024-07-13 16:36:02.517391] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.227 [2024-07-13 16:36:02.520499] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.227 [2024-07-13 16:36:02.520582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:31.227 pt3 00:18:31.227 16:36:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:31.227 16:36:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:31.227 16:36:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:31.227 16:36:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:31.227 16:36:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:31.227 16:36:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:31.227 16:36:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:31.227 16:36:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:31.227 16:36:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:31.484 malloc4 00:18:31.484 16:36:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:31.741 [2024-07-13 16:36:02.994506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:31.741 [2024-07-13 16:36:02.994671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.741 [2024-07-13 16:36:02.994723] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:31.741 [2024-07-13 16:36:02.994786] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.741 [2024-07-13 16:36:02.998008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.741 [2024-07-13 16:36:02.998096] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:31.741 pt4 00:18:31.741 16:36:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:31.741 16:36:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:31.741 16:36:03 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:31.998 [2024-07-13 16:36:03.214709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.998 [2024-07-13 16:36:03.217483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.998 [2024-07-13 16:36:03.217571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:31.998 [2024-07-13 16:36:03.217615] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:31.998 [2024-07-13 16:36:03.217859] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:18:31.998 [2024-07-13 16:36:03.217875] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:31.998 [2024-07-13 16:36:03.218088] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:31.998 [2024-07-13 16:36:03.218570] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:18:31.998 [2024-07-13 16:36:03.218592] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:18:31.998 [2024-07-13 16:36:03.218836] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
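Each of the four RAID legs built above follows the same two-step pattern: a 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev that the raid module then claims. A condensed restatement of the loop the script runs (bdev_raid.sh@361-371), using the exact RPC commands and arguments recorded in the trace; the shell loop itself is illustrative, not the harness source:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for i in 1 2 3 4; do
        # 32 MiB backing device, 512-byte blocks (65536 blocks in total).
        $RPC bdev_malloc_create 32 512 -b "malloc$i"
        # Passthru wrapper the raid will claim as member pt$i.
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # Assemble the members: concat level, 64 KiB strip, superblock on (-s).
    $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s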
00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.998 16:36:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.998 "name": "raid_bdev1", 00:18:31.999 "uuid": "781e4f51-aa04-48ac-a440-b4ede60ee988", 00:18:31.999 "strip_size_kb": 64, 00:18:31.999 "state": "online", 00:18:31.999 "raid_level": "concat", 00:18:31.999 "superblock": true, 00:18:31.999 "num_base_bdevs": 4, 00:18:31.999 "num_base_bdevs_discovered": 4, 00:18:31.999 "num_base_bdevs_operational": 4, 00:18:31.999 "base_bdevs_list": [ 00:18:31.999 { 00:18:31.999 "name": "pt1", 00:18:31.999 "uuid": "147692d6-3282-5c22-8102-f15b1869e057", 00:18:31.999 "is_configured": true, 00:18:31.999 "data_offset": 2048, 00:18:31.999 "data_size": 63488 00:18:31.999 }, 00:18:31.999 { 00:18:31.999 "name": "pt2", 00:18:31.999 "uuid": "0bf85bd1-236f-5a4b-9cb3-49d06940769c", 00:18:31.999 "is_configured": true, 00:18:31.999 "data_offset": 2048, 00:18:31.999 "data_size": 63488 00:18:31.999 }, 00:18:31.999 { 00:18:31.999 "name": "pt3", 00:18:31.999 "uuid": "8e0d8ea2-9f8f-5beb-8428-006fbdcb09e7", 00:18:31.999 "is_configured": true, 00:18:31.999 "data_offset": 2048, 00:18:31.999 "data_size": 63488 00:18:31.999 }, 00:18:31.999 { 00:18:31.999 "name": "pt4", 00:18:31.999 "uuid": "c3727e07-03ed-52f1-bc1f-3aeb862b30ff", 00:18:31.999 "is_configured": true, 00:18:31.999 "data_offset": 2048, 00:18:31.999 "data_size": 63488 00:18:31.999 } 00:18:31.999 ] 00:18:31.999 }' 00:18:31.999 16:36:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.999 16:36:03 -- common/autotest_common.sh@10 -- # set +x 00:18:32.931 16:36:04 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:32.931 16:36:04 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:32.931 [2024-07-13 16:36:04.399470] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.188 16:36:04 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=781e4f51-aa04-48ac-a440-b4ede60ee988 00:18:33.188 16:36:04 -- bdev/bdev_raid.sh@380 -- # '[' -z 781e4f51-aa04-48ac-a440-b4ede60ee988 ']' 00:18:33.188 16:36:04 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:33.445 [2024-07-13 16:36:04.663241] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.445 [2024-07-13 16:36:04.663296] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.445 [2024-07-13 16:36:04.663414] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.445 [2024-07-13 16:36:04.663508] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.445 [2024-07-13 16:36:04.663519] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:18:33.445 16:36:04 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.445 16:36:04 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:33.445 16:36:04 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:33.445 16:36:04 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:33.445 16:36:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.445 16:36:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
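The state checks in this trace are all jq filters over the bdev_raid_get_bdevs dump shown above. Two stand-alone queries in the same style, assuming the JSON shape of that dump; these exact filters are illustrative rather than lifted from the script:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # Current state of the raid ("online", "configuring", or "offline").
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
    # Names of the base bdevs currently attached to it.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .base_bdevs_list[].name'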
00:18:33.703 16:36:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.703 16:36:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:33.961 16:36:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.961 16:36:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:34.219 16:36:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:34.219 16:36:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:34.476 16:36:05 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:34.476 16:36:05 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:34.734 16:36:06 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:34.734 16:36:06 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:34.734 16:36:06 -- common/autotest_common.sh@640 -- # local es=0 00:18:34.734 16:36:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:34.734 16:36:06 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.734 16:36:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:34.734 16:36:06 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.734 16:36:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:34.734 16:36:06 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.734 16:36:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:34.734 16:36:06 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.734 16:36:06 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:34.734 16:36:06 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:34.992 [2024-07-13 16:36:06.335522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:34.992 [2024-07-13 16:36:06.338234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:34.992 [2024-07-13 16:36:06.338306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:34.992 [2024-07-13 16:36:06.338336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:34.992 [2024-07-13 16:36:06.338391] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:34.992 [2024-07-13 16:36:06.338472] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:34.992 [2024-07-13 16:36:06.338501] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:34.992 
[2024-07-13 16:36:06.338552] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:34.992 [2024-07-13 16:36:06.338619] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.992 [2024-07-13 16:36:06.338631] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:18:34.992 request: 00:18:34.992 { 00:18:34.992 "name": "raid_bdev1", 00:18:34.992 "raid_level": "concat", 00:18:34.992 "base_bdevs": [ 00:18:34.992 "malloc1", 00:18:34.992 "malloc2", 00:18:34.992 "malloc3", 00:18:34.992 "malloc4" 00:18:34.992 ], 00:18:34.992 "superblock": false, 00:18:34.992 "strip_size_kb": 64, 00:18:34.992 "method": "bdev_raid_create", 00:18:34.992 "req_id": 1 00:18:34.992 } 00:18:34.992 Got JSON-RPC error response 00:18:34.992 response: 00:18:34.992 { 00:18:34.992 "code": -17, 00:18:34.992 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:34.992 } 00:18:34.992 16:36:06 -- common/autotest_common.sh@643 -- # es=1 00:18:34.992 16:36:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:34.992 16:36:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:34.992 16:36:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:34.992 16:36:06 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.992 16:36:06 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:35.251 16:36:06 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:35.251 16:36:06 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:35.251 16:36:06 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:35.509 [2024-07-13 16:36:06.811489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.509 [2024-07-13 16:36:06.811614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.509 [2024-07-13 16:36:06.811657] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:35.509 [2024-07-13 16:36:06.811687] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.509 [2024-07-13 16:36:06.814683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.509 [2024-07-13 16:36:06.814765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.509 [2024-07-13 16:36:06.814873] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:35.509 [2024-07-13 16:36:06.814954] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:35.509 pt1 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.509 16:36:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.767 16:36:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.767 "name": "raid_bdev1", 00:18:35.767 "uuid": "781e4f51-aa04-48ac-a440-b4ede60ee988", 00:18:35.767 "strip_size_kb": 64, 00:18:35.767 "state": "configuring", 00:18:35.767 "raid_level": "concat", 00:18:35.767 "superblock": true, 00:18:35.767 "num_base_bdevs": 4, 00:18:35.767 "num_base_bdevs_discovered": 1, 00:18:35.767 "num_base_bdevs_operational": 4, 00:18:35.767 "base_bdevs_list": [ 00:18:35.767 { 00:18:35.767 "name": "pt1", 00:18:35.767 "uuid": "147692d6-3282-5c22-8102-f15b1869e057", 00:18:35.767 "is_configured": true, 00:18:35.767 "data_offset": 2048, 00:18:35.767 "data_size": 63488 00:18:35.767 }, 00:18:35.767 { 00:18:35.767 "name": null, 00:18:35.767 "uuid": "0bf85bd1-236f-5a4b-9cb3-49d06940769c", 00:18:35.767 "is_configured": false, 00:18:35.767 "data_offset": 2048, 00:18:35.767 "data_size": 63488 00:18:35.767 }, 00:18:35.767 { 00:18:35.767 "name": null, 00:18:35.767 "uuid": "8e0d8ea2-9f8f-5beb-8428-006fbdcb09e7", 00:18:35.767 "is_configured": false, 00:18:35.767 "data_offset": 2048, 00:18:35.767 "data_size": 63488 00:18:35.767 }, 00:18:35.767 { 00:18:35.767 "name": null, 00:18:35.767 "uuid": "c3727e07-03ed-52f1-bc1f-3aeb862b30ff", 00:18:35.767 "is_configured": false, 00:18:35.767 "data_offset": 2048, 00:18:35.767 "data_size": 63488 00:18:35.767 } 00:18:35.767 ] 00:18:35.767 }' 00:18:35.767 16:36:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.767 16:36:07 -- common/autotest_common.sh@10 -- # set +x 00:18:36.335 16:36:07 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:36.335 16:36:07 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:36.594 [2024-07-13 16:36:07.967753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.594 [2024-07-13 16:36:07.967904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.594 [2024-07-13 16:36:07.967960] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:36.594 [2024-07-13 16:36:07.967986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.594 [2024-07-13 16:36:07.968522] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.594 [2024-07-13 16:36:07.968585] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.594 [2024-07-13 16:36:07.968697] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:36.594 [2024-07-13 16:36:07.968740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.594 pt2 00:18:36.594 16:36:07 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:36.858 [2024-07-13 16:36:08.179832] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.858 16:36:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.114 16:36:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.114 "name": "raid_bdev1", 00:18:37.114 "uuid": "781e4f51-aa04-48ac-a440-b4ede60ee988", 00:18:37.114 "strip_size_kb": 64, 00:18:37.114 "state": "configuring", 00:18:37.114 "raid_level": "concat", 00:18:37.114 "superblock": true, 00:18:37.114 "num_base_bdevs": 4, 00:18:37.114 "num_base_bdevs_discovered": 1, 00:18:37.114 "num_base_bdevs_operational": 4, 00:18:37.114 "base_bdevs_list": [ 00:18:37.114 { 00:18:37.114 "name": "pt1", 00:18:37.114 "uuid": "147692d6-3282-5c22-8102-f15b1869e057", 00:18:37.114 "is_configured": true, 00:18:37.114 "data_offset": 2048, 00:18:37.114 "data_size": 63488 00:18:37.114 }, 00:18:37.114 { 00:18:37.114 "name": null, 00:18:37.114 "uuid": "0bf85bd1-236f-5a4b-9cb3-49d06940769c", 00:18:37.114 "is_configured": false, 00:18:37.114 "data_offset": 2048, 00:18:37.114 "data_size": 63488 00:18:37.114 }, 00:18:37.114 { 00:18:37.114 "name": null, 00:18:37.114 "uuid": "8e0d8ea2-9f8f-5beb-8428-006fbdcb09e7", 00:18:37.114 "is_configured": false, 00:18:37.114 "data_offset": 2048, 00:18:37.114 "data_size": 63488 00:18:37.114 }, 00:18:37.114 { 00:18:37.114 "name": null, 00:18:37.114 "uuid": "c3727e07-03ed-52f1-bc1f-3aeb862b30ff", 00:18:37.114 "is_configured": false, 00:18:37.114 "data_offset": 2048, 00:18:37.114 "data_size": 63488 00:18:37.114 } 00:18:37.114 ] 00:18:37.114 }' 00:18:37.114 16:36:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.114 16:36:08 -- common/autotest_common.sh@10 -- # set +x 00:18:37.678 16:36:09 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:37.678 16:36:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:37.678 16:36:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:37.935 [2024-07-13 16:36:09.303987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:37.935 [2024-07-13 16:36:09.304094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.935 [2024-07-13 16:36:09.304145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:37.935 [2024-07-13 16:36:09.304175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.935 [2024-07-13 16:36:09.304741] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.935 [2024-07-13 16:36:09.304799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:37.935 [2024-07-13 16:36:09.304893] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:37.935 [2024-07-13 16:36:09.304917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.935 pt2 00:18:37.935 16:36:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:37.935 16:36:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:37.935 16:36:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:38.192 [2024-07-13 16:36:09.572071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:38.192 [2024-07-13 16:36:09.572187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.192 [2024-07-13 16:36:09.572225] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:38.192 [2024-07-13 16:36:09.572266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.192 [2024-07-13 16:36:09.572774] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.192 [2024-07-13 16:36:09.572832] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:38.192 [2024-07-13 16:36:09.572911] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:38.192 [2024-07-13 16:36:09.572932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:38.192 pt3 00:18:38.192 16:36:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:38.192 16:36:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:38.192 16:36:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:38.449 [2024-07-13 16:36:09.844119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:38.449 [2024-07-13 16:36:09.844225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.449 [2024-07-13 16:36:09.844276] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:38.449 [2024-07-13 16:36:09.844325] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.449 [2024-07-13 16:36:09.844809] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.449 [2024-07-13 16:36:09.844863] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:38.449 [2024-07-13 16:36:09.844947] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:38.449 [2024-07-13 16:36:09.844968] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:38.449 [2024-07-13 16:36:09.845097] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:38.449 [2024-07-13 16:36:09.845107] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:38.449 [2024-07-13 16:36:09.845194] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:18:38.449 [2024-07-13 16:36:09.845554] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:38.449 [2024-07-13 16:36:09.845565] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:38.449 [2024-07-13 16:36:09.845667] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:38.449 pt4 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.449 16:36:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.706 16:36:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:38.707 "name": "raid_bdev1", 00:18:38.707 "uuid": "781e4f51-aa04-48ac-a440-b4ede60ee988", 00:18:38.707 "strip_size_kb": 64, 00:18:38.707 "state": "online", 00:18:38.707 "raid_level": "concat", 00:18:38.707 "superblock": true, 00:18:38.707 "num_base_bdevs": 4, 00:18:38.707 "num_base_bdevs_discovered": 4, 00:18:38.707 "num_base_bdevs_operational": 4, 00:18:38.707 "base_bdevs_list": [ 00:18:38.707 { 00:18:38.707 "name": "pt1", 00:18:38.707 "uuid": "147692d6-3282-5c22-8102-f15b1869e057", 00:18:38.707 "is_configured": true, 00:18:38.707 "data_offset": 2048, 00:18:38.707 "data_size": 63488 00:18:38.707 }, 00:18:38.707 { 00:18:38.707 "name": "pt2", 00:18:38.707 "uuid": "0bf85bd1-236f-5a4b-9cb3-49d06940769c", 00:18:38.707 "is_configured": true, 00:18:38.707 "data_offset": 2048, 00:18:38.707 "data_size": 63488 00:18:38.707 }, 00:18:38.707 { 00:18:38.707 "name": "pt3", 00:18:38.707 "uuid": "8e0d8ea2-9f8f-5beb-8428-006fbdcb09e7", 00:18:38.707 "is_configured": true, 00:18:38.707 "data_offset": 2048, 00:18:38.707 "data_size": 63488 00:18:38.707 }, 00:18:38.707 { 00:18:38.707 "name": "pt4", 00:18:38.707 "uuid": "c3727e07-03ed-52f1-bc1f-3aeb862b30ff", 00:18:38.707 "is_configured": true, 00:18:38.707 "data_offset": 2048, 00:18:38.707 "data_size": 63488 00:18:38.707 } 00:18:38.707 ] 00:18:38.707 }' 00:18:38.707 16:36:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.707 16:36:10 -- common/autotest_common.sh@10 -- # set +x 00:18:39.272 16:36:10 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:39.272 16:36:10 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:39.837 [2024-07-13 16:36:11.004584] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.837 16:36:11 -- bdev/bdev_raid.sh@430 -- # '[' 781e4f51-aa04-48ac-a440-b4ede60ee988 '!=' 781e4f51-aa04-48ac-a440-b4ede60ee988 ']' 00:18:39.837 16:36:11 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:39.837 16:36:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:39.837 16:36:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:39.837 16:36:11 -- bdev/bdev_raid.sh@511 -- # killprocess 131092 00:18:39.837 16:36:11 -- common/autotest_common.sh@926 -- # '[' 
-z 131092 ']' 00:18:39.837 16:36:11 -- common/autotest_common.sh@930 -- # kill -0 131092 00:18:39.837 16:36:11 -- common/autotest_common.sh@931 -- # uname 00:18:39.837 16:36:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:39.837 16:36:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131092 00:18:39.837 16:36:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:39.837 16:36:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:39.837 16:36:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131092' 00:18:39.837 killing process with pid 131092 00:18:39.837 16:36:11 -- common/autotest_common.sh@945 -- # kill 131092 00:18:39.837 16:36:11 -- common/autotest_common.sh@950 -- # wait 131092 00:18:39.837 [2024-07-13 16:36:11.069521] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:39.837 [2024-07-13 16:36:11.069654] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.837 [2024-07-13 16:36:11.069774] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.837 [2024-07-13 16:36:11.069798] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:39.837 [2024-07-13 16:36:11.158434] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:40.402 00:18:40.402 real 0m11.672s 00:18:40.402 user 0m20.344s 00:18:40.402 sys 0m2.160s 00:18:40.402 16:36:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:40.402 16:36:11 -- common/autotest_common.sh@10 -- # set +x 00:18:40.402 ************************************ 00:18:40.402 END TEST raid_superblock_test 00:18:40.402 ************************************ 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:40.402 16:36:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:40.402 16:36:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:40.402 16:36:11 -- common/autotest_common.sh@10 -- # set +x 00:18:40.402 ************************************ 00:18:40.402 START TEST raid_state_function_test 00:18:40.402 ************************************ 00:18:40.402 16:36:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:40.402 16:36:11 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=131420 00:18:40.402 Process raid pid: 131420 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131420' 00:18:40.402 16:36:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131420 /var/tmp/spdk-raid.sock 00:18:40.402 16:36:11 -- common/autotest_common.sh@819 -- # '[' -z 131420 ']' 00:18:40.402 16:36:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:40.402 16:36:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:40.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:40.402 16:36:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:40.403 16:36:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:40.403 16:36:11 -- common/autotest_common.sh@10 -- # set +x 00:18:40.403 [2024-07-13 16:36:11.725424] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
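The state-function test beginning here creates Existed_Raid before any of its BaseBdev members exist: the raid module accepts the request and parks the bdev in the "configuring" state until all four members are discovered. A minimal sketch of that flow against the same socket, with the create command taken verbatim from the trace below and the jq check added as an assumed illustration:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # raid1 takes no strip size, so there is no -z argument here.
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # With zero members discovered the bdev reports "configuring", not "online".
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'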
00:18:40.403 [2024-07-13 16:36:11.726426] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.660 [2024-07-13 16:36:11.884874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.660 [2024-07-13 16:36:11.979341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.660 [2024-07-13 16:36:12.060495] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.592 16:36:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:41.592 16:36:12 -- common/autotest_common.sh@852 -- # return 0 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:41.592 [2024-07-13 16:36:12.943937] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.592 [2024-07-13 16:36:12.944120] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.592 [2024-07-13 16:36:12.944180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.592 [2024-07-13 16:36:12.944230] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.592 [2024-07-13 16:36:12.944260] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.592 [2024-07-13 16:36:12.944373] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.592 [2024-07-13 16:36:12.944407] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:41.592 [2024-07-13 16:36:12.944463] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.592 16:36:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.849 16:36:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.849 "name": "Existed_Raid", 00:18:41.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.849 "strip_size_kb": 0, 00:18:41.849 "state": "configuring", 00:18:41.849 "raid_level": "raid1", 00:18:41.849 "superblock": false, 00:18:41.849 "num_base_bdevs": 4, 00:18:41.849 "num_base_bdevs_discovered": 0, 00:18:41.849 "num_base_bdevs_operational": 4, 00:18:41.849 "base_bdevs_list": [ 00:18:41.849 { 00:18:41.849 "name": 
"BaseBdev1", 00:18:41.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.849 "is_configured": false, 00:18:41.849 "data_offset": 0, 00:18:41.849 "data_size": 0 00:18:41.849 }, 00:18:41.849 { 00:18:41.849 "name": "BaseBdev2", 00:18:41.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.849 "is_configured": false, 00:18:41.849 "data_offset": 0, 00:18:41.849 "data_size": 0 00:18:41.849 }, 00:18:41.849 { 00:18:41.849 "name": "BaseBdev3", 00:18:41.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.849 "is_configured": false, 00:18:41.849 "data_offset": 0, 00:18:41.850 "data_size": 0 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "name": "BaseBdev4", 00:18:41.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.850 "is_configured": false, 00:18:41.850 "data_offset": 0, 00:18:41.850 "data_size": 0 00:18:41.850 } 00:18:41.850 ] 00:18:41.850 }' 00:18:41.850 16:36:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.850 16:36:13 -- common/autotest_common.sh@10 -- # set +x 00:18:42.414 16:36:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:42.671 [2024-07-13 16:36:14.063943] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:42.671 [2024-07-13 16:36:14.064152] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:42.671 16:36:14 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:42.929 [2024-07-13 16:36:14.336070] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:42.929 [2024-07-13 16:36:14.336451] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:42.929 [2024-07-13 16:36:14.336556] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:42.929 [2024-07-13 16:36:14.336622] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:42.929 [2024-07-13 16:36:14.336668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:42.929 [2024-07-13 16:36:14.336814] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:42.929 [2024-07-13 16:36:14.336851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:42.929 [2024-07-13 16:36:14.336905] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:42.929 16:36:14 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:43.187 [2024-07-13 16:36:14.560915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.187 BaseBdev1 00:18:43.187 16:36:14 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:43.187 16:36:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:43.187 16:36:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:43.187 16:36:14 -- common/autotest_common.sh@889 -- # local i 00:18:43.187 16:36:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:43.187 16:36:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:43.187 16:36:14 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.444 16:36:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:43.700 [ 00:18:43.700 { 00:18:43.700 "name": "BaseBdev1", 00:18:43.700 "aliases": [ 00:18:43.700 "2743d801-e6a1-4c80-9e86-2d57d8c49f79" 00:18:43.700 ], 00:18:43.700 "product_name": "Malloc disk", 00:18:43.700 "block_size": 512, 00:18:43.700 "num_blocks": 65536, 00:18:43.700 "uuid": "2743d801-e6a1-4c80-9e86-2d57d8c49f79", 00:18:43.700 "assigned_rate_limits": { 00:18:43.700 "rw_ios_per_sec": 0, 00:18:43.700 "rw_mbytes_per_sec": 0, 00:18:43.700 "r_mbytes_per_sec": 0, 00:18:43.701 "w_mbytes_per_sec": 0 00:18:43.701 }, 00:18:43.701 "claimed": true, 00:18:43.701 "claim_type": "exclusive_write", 00:18:43.701 "zoned": false, 00:18:43.701 "supported_io_types": { 00:18:43.701 "read": true, 00:18:43.701 "write": true, 00:18:43.701 "unmap": true, 00:18:43.701 "write_zeroes": true, 00:18:43.701 "flush": true, 00:18:43.701 "reset": true, 00:18:43.701 "compare": false, 00:18:43.701 "compare_and_write": false, 00:18:43.701 "abort": true, 00:18:43.701 "nvme_admin": false, 00:18:43.701 "nvme_io": false 00:18:43.701 }, 00:18:43.701 "memory_domains": [ 00:18:43.701 { 00:18:43.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.701 "dma_device_type": 2 00:18:43.701 } 00:18:43.701 ], 00:18:43.701 "driver_specific": {} 00:18:43.701 } 00:18:43.701 ] 00:18:43.701 16:36:15 -- common/autotest_common.sh@895 -- # return 0 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.701 16:36:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.958 16:36:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.958 "name": "Existed_Raid", 00:18:43.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.958 "strip_size_kb": 0, 00:18:43.958 "state": "configuring", 00:18:43.958 "raid_level": "raid1", 00:18:43.958 "superblock": false, 00:18:43.958 "num_base_bdevs": 4, 00:18:43.958 "num_base_bdevs_discovered": 1, 00:18:43.958 "num_base_bdevs_operational": 4, 00:18:43.958 "base_bdevs_list": [ 00:18:43.958 { 00:18:43.958 "name": "BaseBdev1", 00:18:43.958 "uuid": "2743d801-e6a1-4c80-9e86-2d57d8c49f79", 00:18:43.958 "is_configured": true, 00:18:43.958 "data_offset": 0, 00:18:43.958 "data_size": 65536 00:18:43.958 }, 00:18:43.958 { 00:18:43.958 "name": "BaseBdev2", 00:18:43.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.958 "is_configured": false, 00:18:43.958 "data_offset": 0, 00:18:43.958 "data_size": 0 00:18:43.958 }, 
00:18:43.958 { 00:18:43.958 "name": "BaseBdev3", 00:18:43.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.958 "is_configured": false, 00:18:43.958 "data_offset": 0, 00:18:43.958 "data_size": 0 00:18:43.958 }, 00:18:43.958 { 00:18:43.958 "name": "BaseBdev4", 00:18:43.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.958 "is_configured": false, 00:18:43.958 "data_offset": 0, 00:18:43.958 "data_size": 0 00:18:43.958 } 00:18:43.958 ] 00:18:43.958 }' 00:18:43.958 16:36:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.958 16:36:15 -- common/autotest_common.sh@10 -- # set +x 00:18:44.523 16:36:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:44.781 [2024-07-13 16:36:16.125317] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.781 [2024-07-13 16:36:16.125663] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:44.781 16:36:16 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:44.781 16:36:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:45.040 [2024-07-13 16:36:16.401768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.040 [2024-07-13 16:36:16.404771] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:45.040 [2024-07-13 16:36:16.405041] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:45.040 [2024-07-13 16:36:16.405162] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:45.040 [2024-07-13 16:36:16.405231] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:45.040 [2024-07-13 16:36:16.405263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:45.040 [2024-07-13 16:36:16.405363] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.040 16:36:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.299 16:36:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.299 "name": "Existed_Raid", 00:18:45.299 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:45.299 "strip_size_kb": 0, 00:18:45.299 "state": "configuring", 00:18:45.299 "raid_level": "raid1", 00:18:45.299 "superblock": false, 00:18:45.299 "num_base_bdevs": 4, 00:18:45.299 "num_base_bdevs_discovered": 1, 00:18:45.299 "num_base_bdevs_operational": 4, 00:18:45.299 "base_bdevs_list": [ 00:18:45.299 { 00:18:45.299 "name": "BaseBdev1", 00:18:45.299 "uuid": "2743d801-e6a1-4c80-9e86-2d57d8c49f79", 00:18:45.299 "is_configured": true, 00:18:45.299 "data_offset": 0, 00:18:45.299 "data_size": 65536 00:18:45.299 }, 00:18:45.299 { 00:18:45.299 "name": "BaseBdev2", 00:18:45.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.299 "is_configured": false, 00:18:45.299 "data_offset": 0, 00:18:45.299 "data_size": 0 00:18:45.299 }, 00:18:45.299 { 00:18:45.299 "name": "BaseBdev3", 00:18:45.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.299 "is_configured": false, 00:18:45.299 "data_offset": 0, 00:18:45.299 "data_size": 0 00:18:45.299 }, 00:18:45.299 { 00:18:45.299 "name": "BaseBdev4", 00:18:45.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.299 "is_configured": false, 00:18:45.299 "data_offset": 0, 00:18:45.299 "data_size": 0 00:18:45.299 } 00:18:45.299 ] 00:18:45.299 }' 00:18:45.299 16:36:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.299 16:36:16 -- common/autotest_common.sh@10 -- # set +x 00:18:45.865 16:36:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:46.431 [2024-07-13 16:36:17.602400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.431 BaseBdev2 00:18:46.431 16:36:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:46.431 16:36:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:46.431 16:36:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:46.431 16:36:17 -- common/autotest_common.sh@889 -- # local i 00:18:46.431 16:36:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:46.431 16:36:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:46.431 16:36:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:46.689 16:36:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:46.689 [ 00:18:46.689 { 00:18:46.689 "name": "BaseBdev2", 00:18:46.689 "aliases": [ 00:18:46.689 "48aad8e7-d4af-4ea1-9a70-ef5eaf1fc1f9" 00:18:46.689 ], 00:18:46.689 "product_name": "Malloc disk", 00:18:46.689 "block_size": 512, 00:18:46.689 "num_blocks": 65536, 00:18:46.689 "uuid": "48aad8e7-d4af-4ea1-9a70-ef5eaf1fc1f9", 00:18:46.689 "assigned_rate_limits": { 00:18:46.689 "rw_ios_per_sec": 0, 00:18:46.689 "rw_mbytes_per_sec": 0, 00:18:46.689 "r_mbytes_per_sec": 0, 00:18:46.689 "w_mbytes_per_sec": 0 00:18:46.689 }, 00:18:46.689 "claimed": true, 00:18:46.689 "claim_type": "exclusive_write", 00:18:46.689 "zoned": false, 00:18:46.689 "supported_io_types": { 00:18:46.689 "read": true, 00:18:46.689 "write": true, 00:18:46.689 "unmap": true, 00:18:46.689 "write_zeroes": true, 00:18:46.689 "flush": true, 00:18:46.689 "reset": true, 00:18:46.689 "compare": false, 00:18:46.689 "compare_and_write": false, 00:18:46.689 "abort": true, 00:18:46.689 "nvme_admin": false, 00:18:46.689 "nvme_io": false 00:18:46.689 }, 00:18:46.689 "memory_domains": [ 00:18:46.689 { 
00:18:46.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.689 "dma_device_type": 2 00:18:46.689 } 00:18:46.689 ], 00:18:46.689 "driver_specific": {} 00:18:46.689 } 00:18:46.689 ] 00:18:46.689 16:36:18 -- common/autotest_common.sh@895 -- # return 0 00:18:46.689 16:36:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:46.689 16:36:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.690 16:36:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.948 16:36:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.948 "name": "Existed_Raid", 00:18:46.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.948 "strip_size_kb": 0, 00:18:46.948 "state": "configuring", 00:18:46.948 "raid_level": "raid1", 00:18:46.948 "superblock": false, 00:18:46.948 "num_base_bdevs": 4, 00:18:46.948 "num_base_bdevs_discovered": 2, 00:18:46.948 "num_base_bdevs_operational": 4, 00:18:46.948 "base_bdevs_list": [ 00:18:46.948 { 00:18:46.948 "name": "BaseBdev1", 00:18:46.948 "uuid": "2743d801-e6a1-4c80-9e86-2d57d8c49f79", 00:18:46.948 "is_configured": true, 00:18:46.949 "data_offset": 0, 00:18:46.949 "data_size": 65536 00:18:46.949 }, 00:18:46.949 { 00:18:46.949 "name": "BaseBdev2", 00:18:46.949 "uuid": "48aad8e7-d4af-4ea1-9a70-ef5eaf1fc1f9", 00:18:46.949 "is_configured": true, 00:18:46.949 "data_offset": 0, 00:18:46.949 "data_size": 65536 00:18:46.949 }, 00:18:46.949 { 00:18:46.949 "name": "BaseBdev3", 00:18:46.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.949 "is_configured": false, 00:18:46.949 "data_offset": 0, 00:18:46.949 "data_size": 0 00:18:46.949 }, 00:18:46.949 { 00:18:46.949 "name": "BaseBdev4", 00:18:46.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.949 "is_configured": false, 00:18:46.949 "data_offset": 0, 00:18:46.949 "data_size": 0 00:18:46.949 } 00:18:46.949 ] 00:18:46.949 }' 00:18:46.949 16:36:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.949 16:36:18 -- common/autotest_common.sh@10 -- # set +x 00:18:47.882 16:36:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:47.882 [2024-07-13 16:36:19.328908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:47.882 BaseBdev3 00:18:47.882 16:36:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:47.882 16:36:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:47.882 16:36:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:47.882 16:36:19 -- 
common/autotest_common.sh@889 -- # local i 00:18:47.882 16:36:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:47.882 16:36:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:47.882 16:36:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:48.445 16:36:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:48.445 [ 00:18:48.445 { 00:18:48.445 "name": "BaseBdev3", 00:18:48.445 "aliases": [ 00:18:48.445 "8ecb59de-1478-4582-9131-2002512309af" 00:18:48.445 ], 00:18:48.445 "product_name": "Malloc disk", 00:18:48.445 "block_size": 512, 00:18:48.445 "num_blocks": 65536, 00:18:48.445 "uuid": "8ecb59de-1478-4582-9131-2002512309af", 00:18:48.445 "assigned_rate_limits": { 00:18:48.445 "rw_ios_per_sec": 0, 00:18:48.445 "rw_mbytes_per_sec": 0, 00:18:48.445 "r_mbytes_per_sec": 0, 00:18:48.445 "w_mbytes_per_sec": 0 00:18:48.445 }, 00:18:48.445 "claimed": true, 00:18:48.445 "claim_type": "exclusive_write", 00:18:48.445 "zoned": false, 00:18:48.445 "supported_io_types": { 00:18:48.445 "read": true, 00:18:48.445 "write": true, 00:18:48.445 "unmap": true, 00:18:48.445 "write_zeroes": true, 00:18:48.445 "flush": true, 00:18:48.445 "reset": true, 00:18:48.445 "compare": false, 00:18:48.445 "compare_and_write": false, 00:18:48.445 "abort": true, 00:18:48.445 "nvme_admin": false, 00:18:48.445 "nvme_io": false 00:18:48.445 }, 00:18:48.445 "memory_domains": [ 00:18:48.445 { 00:18:48.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.445 "dma_device_type": 2 00:18:48.445 } 00:18:48.445 ], 00:18:48.445 "driver_specific": {} 00:18:48.445 } 00:18:48.445 ] 00:18:48.445 16:36:19 -- common/autotest_common.sh@895 -- # return 0 00:18:48.445 16:36:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:48.445 16:36:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:48.445 16:36:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:48.445 16:36:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:48.445 16:36:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:48.445 16:36:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:48.445 16:36:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:48.445 16:36:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:48.446 16:36:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.446 16:36:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.446 16:36:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.446 16:36:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.446 16:36:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.446 16:36:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.703 16:36:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.703 "name": "Existed_Raid", 00:18:48.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.703 "strip_size_kb": 0, 00:18:48.703 "state": "configuring", 00:18:48.703 "raid_level": "raid1", 00:18:48.703 "superblock": false, 00:18:48.703 "num_base_bdevs": 4, 00:18:48.703 "num_base_bdevs_discovered": 3, 00:18:48.703 "num_base_bdevs_operational": 4, 00:18:48.703 "base_bdevs_list": [ 00:18:48.703 { 00:18:48.703 "name": "BaseBdev1", 
00:18:48.703 "uuid": "2743d801-e6a1-4c80-9e86-2d57d8c49f79", 00:18:48.703 "is_configured": true, 00:18:48.703 "data_offset": 0, 00:18:48.703 "data_size": 65536 00:18:48.703 }, 00:18:48.703 { 00:18:48.703 "name": "BaseBdev2", 00:18:48.703 "uuid": "48aad8e7-d4af-4ea1-9a70-ef5eaf1fc1f9", 00:18:48.703 "is_configured": true, 00:18:48.703 "data_offset": 0, 00:18:48.703 "data_size": 65536 00:18:48.703 }, 00:18:48.703 { 00:18:48.703 "name": "BaseBdev3", 00:18:48.703 "uuid": "8ecb59de-1478-4582-9131-2002512309af", 00:18:48.703 "is_configured": true, 00:18:48.703 "data_offset": 0, 00:18:48.703 "data_size": 65536 00:18:48.703 }, 00:18:48.703 { 00:18:48.703 "name": "BaseBdev4", 00:18:48.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.703 "is_configured": false, 00:18:48.703 "data_offset": 0, 00:18:48.703 "data_size": 0 00:18:48.703 } 00:18:48.703 ] 00:18:48.703 }' 00:18:48.703 16:36:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.703 16:36:20 -- common/autotest_common.sh@10 -- # set +x 00:18:49.268 16:36:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:49.526 [2024-07-13 16:36:20.959181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:49.526 [2024-07-13 16:36:20.959272] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:49.526 [2024-07-13 16:36:20.959282] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:49.526 [2024-07-13 16:36:20.959436] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:49.526 [2024-07-13 16:36:20.959870] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:49.526 [2024-07-13 16:36:20.959883] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:49.526 [2024-07-13 16:36:20.960151] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.526 BaseBdev4 00:18:49.526 16:36:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:49.526 16:36:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:49.526 16:36:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:49.526 16:36:20 -- common/autotest_common.sh@889 -- # local i 00:18:49.526 16:36:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:49.526 16:36:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:49.526 16:36:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:50.091 16:36:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:50.091 [ 00:18:50.091 { 00:18:50.091 "name": "BaseBdev4", 00:18:50.091 "aliases": [ 00:18:50.091 "4093d595-7dc0-4d06-856e-1dde3e0bdb4c" 00:18:50.091 ], 00:18:50.091 "product_name": "Malloc disk", 00:18:50.091 "block_size": 512, 00:18:50.091 "num_blocks": 65536, 00:18:50.091 "uuid": "4093d595-7dc0-4d06-856e-1dde3e0bdb4c", 00:18:50.091 "assigned_rate_limits": { 00:18:50.091 "rw_ios_per_sec": 0, 00:18:50.091 "rw_mbytes_per_sec": 0, 00:18:50.091 "r_mbytes_per_sec": 0, 00:18:50.091 "w_mbytes_per_sec": 0 00:18:50.091 }, 00:18:50.091 "claimed": true, 00:18:50.091 "claim_type": "exclusive_write", 00:18:50.091 "zoned": false, 00:18:50.091 "supported_io_types": { 
00:18:50.091 "read": true, 00:18:50.091 "write": true, 00:18:50.091 "unmap": true, 00:18:50.091 "write_zeroes": true, 00:18:50.091 "flush": true, 00:18:50.091 "reset": true, 00:18:50.091 "compare": false, 00:18:50.091 "compare_and_write": false, 00:18:50.091 "abort": true, 00:18:50.091 "nvme_admin": false, 00:18:50.091 "nvme_io": false 00:18:50.091 }, 00:18:50.091 "memory_domains": [ 00:18:50.091 { 00:18:50.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.091 "dma_device_type": 2 00:18:50.091 } 00:18:50.091 ], 00:18:50.091 "driver_specific": {} 00:18:50.091 } 00:18:50.091 ] 00:18:50.091 16:36:21 -- common/autotest_common.sh@895 -- # return 0 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.091 16:36:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.092 16:36:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.092 16:36:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.092 16:36:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.350 16:36:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.350 "name": "Existed_Raid", 00:18:50.350 "uuid": "21c9c3e3-790d-472a-b551-f6b396d503d2", 00:18:50.350 "strip_size_kb": 0, 00:18:50.350 "state": "online", 00:18:50.350 "raid_level": "raid1", 00:18:50.350 "superblock": false, 00:18:50.350 "num_base_bdevs": 4, 00:18:50.350 "num_base_bdevs_discovered": 4, 00:18:50.350 "num_base_bdevs_operational": 4, 00:18:50.350 "base_bdevs_list": [ 00:18:50.350 { 00:18:50.350 "name": "BaseBdev1", 00:18:50.350 "uuid": "2743d801-e6a1-4c80-9e86-2d57d8c49f79", 00:18:50.350 "is_configured": true, 00:18:50.350 "data_offset": 0, 00:18:50.350 "data_size": 65536 00:18:50.350 }, 00:18:50.350 { 00:18:50.350 "name": "BaseBdev2", 00:18:50.350 "uuid": "48aad8e7-d4af-4ea1-9a70-ef5eaf1fc1f9", 00:18:50.350 "is_configured": true, 00:18:50.350 "data_offset": 0, 00:18:50.350 "data_size": 65536 00:18:50.350 }, 00:18:50.350 { 00:18:50.350 "name": "BaseBdev3", 00:18:50.350 "uuid": "8ecb59de-1478-4582-9131-2002512309af", 00:18:50.350 "is_configured": true, 00:18:50.350 "data_offset": 0, 00:18:50.350 "data_size": 65536 00:18:50.350 }, 00:18:50.350 { 00:18:50.350 "name": "BaseBdev4", 00:18:50.350 "uuid": "4093d595-7dc0-4d06-856e-1dde3e0bdb4c", 00:18:50.350 "is_configured": true, 00:18:50.350 "data_offset": 0, 00:18:50.350 "data_size": 65536 00:18:50.350 } 00:18:50.350 ] 00:18:50.350 }' 00:18:50.350 16:36:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.350 16:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:51.284 [2024-07-13 16:36:22.571787] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.284 16:36:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.542 16:36:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.542 "name": "Existed_Raid", 00:18:51.542 "uuid": "21c9c3e3-790d-472a-b551-f6b396d503d2", 00:18:51.542 "strip_size_kb": 0, 00:18:51.542 "state": "online", 00:18:51.542 "raid_level": "raid1", 00:18:51.542 "superblock": false, 00:18:51.542 "num_base_bdevs": 4, 00:18:51.542 "num_base_bdevs_discovered": 3, 00:18:51.542 "num_base_bdevs_operational": 3, 00:18:51.542 "base_bdevs_list": [ 00:18:51.542 { 00:18:51.542 "name": null, 00:18:51.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.542 "is_configured": false, 00:18:51.542 "data_offset": 0, 00:18:51.542 "data_size": 65536 00:18:51.542 }, 00:18:51.542 { 00:18:51.542 "name": "BaseBdev2", 00:18:51.542 "uuid": "48aad8e7-d4af-4ea1-9a70-ef5eaf1fc1f9", 00:18:51.542 "is_configured": true, 00:18:51.542 "data_offset": 0, 00:18:51.542 "data_size": 65536 00:18:51.542 }, 00:18:51.542 { 00:18:51.542 "name": "BaseBdev3", 00:18:51.542 "uuid": "8ecb59de-1478-4582-9131-2002512309af", 00:18:51.542 "is_configured": true, 00:18:51.542 "data_offset": 0, 00:18:51.542 "data_size": 65536 00:18:51.542 }, 00:18:51.542 { 00:18:51.542 "name": "BaseBdev4", 00:18:51.542 "uuid": "4093d595-7dc0-4d06-856e-1dde3e0bdb4c", 00:18:51.542 "is_configured": true, 00:18:51.542 "data_offset": 0, 00:18:51.542 "data_size": 65536 00:18:51.542 } 00:18:51.542 ] 00:18:51.542 }' 00:18:51.542 16:36:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.542 16:36:22 -- common/autotest_common.sh@10 -- # set +x 00:18:52.106 16:36:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:52.106 16:36:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:52.106 16:36:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.106 16:36:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:52.363 16:36:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:52.363 16:36:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:52.363 16:36:23 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:52.621 [2024-07-13 16:36:24.038564] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:52.621 16:36:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:52.621 16:36:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:52.621 16:36:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.621 16:36:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:52.878 16:36:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:52.878 16:36:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:52.878 16:36:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:53.135 [2024-07-13 16:36:24.482164] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:53.135 16:36:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:53.135 16:36:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:53.135 16:36:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.135 16:36:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:53.392 16:36:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:53.392 16:36:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:53.392 16:36:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:53.650 [2024-07-13 16:36:24.944557] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:53.650 [2024-07-13 16:36:24.944893] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.650 [2024-07-13 16:36:24.945090] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.650 [2024-07-13 16:36:24.967298] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.650 [2024-07-13 16:36:24.967602] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:53.650 16:36:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:53.650 16:36:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:53.650 16:36:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:53.650 16:36:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.908 16:36:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:53.908 16:36:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:53.908 16:36:25 -- bdev/bdev_raid.sh@287 -- # killprocess 131420 00:18:53.908 16:36:25 -- common/autotest_common.sh@926 -- # '[' -z 131420 ']' 00:18:53.908 16:36:25 -- common/autotest_common.sh@930 -- # kill -0 131420 00:18:53.908 16:36:25 -- common/autotest_common.sh@931 -- # uname 00:18:53.908 16:36:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:53.908 16:36:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131420 00:18:53.908 16:36:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:53.908 16:36:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:53.908 16:36:25 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 131420' 00:18:53.908 killing process with pid 131420 00:18:53.908 16:36:25 -- common/autotest_common.sh@945 -- # kill 131420 00:18:53.908 16:36:25 -- common/autotest_common.sh@950 -- # wait 131420 00:18:53.908 [2024-07-13 16:36:25.238686] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:53.908 [2024-07-13 16:36:25.238846] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:54.474 16:36:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:54.474 00:18:54.474 real 0m14.006s 00:18:54.474 user 0m24.814s 00:18:54.474 sys 0m2.568s 00:18:54.474 16:36:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:54.474 16:36:25 -- common/autotest_common.sh@10 -- # set +x 00:18:54.474 ************************************ 00:18:54.474 END TEST raid_state_function_test 00:18:54.474 ************************************ 00:18:54.474 16:36:25 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:54.474 16:36:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:54.474 16:36:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:54.474 16:36:25 -- common/autotest_common.sh@10 -- # set +x 00:18:54.474 ************************************ 00:18:54.475 START TEST raid_state_function_test_sb 00:18:54.475 ************************************ 00:18:54.475 16:36:25 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=131847 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131847' 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:54.475 Process raid pid: 131847 00:18:54.475 16:36:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131847 /var/tmp/spdk-raid.sock 00:18:54.475 16:36:25 -- common/autotest_common.sh@819 -- # '[' -z 131847 ']' 00:18:54.475 16:36:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:54.475 16:36:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:54.475 16:36:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:54.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:54.475 16:36:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:54.475 16:36:25 -- common/autotest_common.sh@10 -- # set +x 00:18:54.475 [2024-07-13 16:36:25.824223] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:54.475 [2024-07-13 16:36:25.824927] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.732 [2024-07-13 16:36:25.978483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.732 [2024-07-13 16:36:26.071268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.732 [2024-07-13 16:36:26.153209] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:55.314 16:36:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:55.314 16:36:26 -- common/autotest_common.sh@852 -- # return 0 00:18:55.314 16:36:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:55.572 [2024-07-13 16:36:26.928477] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:55.572 [2024-07-13 16:36:26.928924] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:55.572 [2024-07-13 16:36:26.929059] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.572 [2024-07-13 16:36:26.929123] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.572 [2024-07-13 16:36:26.929152] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:55.572 [2024-07-13 16:36:26.929273] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:55.572 [2024-07-13 16:36:26.929308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:55.572 [2024-07-13 16:36:26.929362] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:55.572 16:36:26 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.572 16:36:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.830 16:36:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.830 "name": "Existed_Raid", 00:18:55.830 "uuid": "3d41b7f0-e5bb-4c33-abb4-4222b50ab767", 00:18:55.830 "strip_size_kb": 0, 00:18:55.830 "state": "configuring", 00:18:55.830 "raid_level": "raid1", 00:18:55.830 "superblock": true, 00:18:55.830 "num_base_bdevs": 4, 00:18:55.830 "num_base_bdevs_discovered": 0, 00:18:55.830 "num_base_bdevs_operational": 4, 00:18:55.830 "base_bdevs_list": [ 00:18:55.830 { 00:18:55.830 "name": "BaseBdev1", 00:18:55.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.830 "is_configured": false, 00:18:55.830 "data_offset": 0, 00:18:55.830 "data_size": 0 00:18:55.830 }, 00:18:55.830 { 00:18:55.830 "name": "BaseBdev2", 00:18:55.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.830 "is_configured": false, 00:18:55.830 "data_offset": 0, 00:18:55.830 "data_size": 0 00:18:55.830 }, 00:18:55.830 { 00:18:55.831 "name": "BaseBdev3", 00:18:55.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.831 "is_configured": false, 00:18:55.831 "data_offset": 0, 00:18:55.831 "data_size": 0 00:18:55.831 }, 00:18:55.831 { 00:18:55.831 "name": "BaseBdev4", 00:18:55.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.831 "is_configured": false, 00:18:55.831 "data_offset": 0, 00:18:55.831 "data_size": 0 00:18:55.831 } 00:18:55.831 ] 00:18:55.831 }' 00:18:55.831 16:36:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.831 16:36:27 -- common/autotest_common.sh@10 -- # set +x 00:18:56.396 16:36:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:56.653 [2024-07-13 16:36:27.988579] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:56.653 [2024-07-13 16:36:27.988966] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:56.653 16:36:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:56.911 [2024-07-13 16:36:28.188716] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:56.911 [2024-07-13 16:36:28.189078] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:56.911 [2024-07-13 16:36:28.189180] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.911 [2024-07-13 16:36:28.189247] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.911 [2024-07-13 16:36:28.189276] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:56.911 [2024-07-13 16:36:28.189317] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:56.911 [2024-07-13 16:36:28.189400] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:56.911 [2024-07-13 16:36:28.189460] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:56.911 16:36:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:57.169 [2024-07-13 16:36:28.481465] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.169 BaseBdev1 00:18:57.169 16:36:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:57.169 16:36:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:57.169 16:36:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:57.169 16:36:28 -- common/autotest_common.sh@889 -- # local i 00:18:57.169 16:36:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:57.169 16:36:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:57.169 16:36:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:57.427 16:36:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:57.427 [ 00:18:57.427 { 00:18:57.427 "name": "BaseBdev1", 00:18:57.427 "aliases": [ 00:18:57.427 "f68cc4d4-fec6-46fd-a5a0-15a5aee9c05c" 00:18:57.427 ], 00:18:57.427 "product_name": "Malloc disk", 00:18:57.427 "block_size": 512, 00:18:57.427 "num_blocks": 65536, 00:18:57.427 "uuid": "f68cc4d4-fec6-46fd-a5a0-15a5aee9c05c", 00:18:57.427 "assigned_rate_limits": { 00:18:57.427 "rw_ios_per_sec": 0, 00:18:57.427 "rw_mbytes_per_sec": 0, 00:18:57.427 "r_mbytes_per_sec": 0, 00:18:57.427 "w_mbytes_per_sec": 0 00:18:57.427 }, 00:18:57.427 "claimed": true, 00:18:57.427 "claim_type": "exclusive_write", 00:18:57.427 "zoned": false, 00:18:57.427 "supported_io_types": { 00:18:57.427 "read": true, 00:18:57.427 "write": true, 00:18:57.427 "unmap": true, 00:18:57.427 "write_zeroes": true, 00:18:57.427 "flush": true, 00:18:57.427 "reset": true, 00:18:57.427 "compare": false, 00:18:57.427 "compare_and_write": false, 00:18:57.427 "abort": true, 00:18:57.427 "nvme_admin": false, 00:18:57.427 "nvme_io": false 00:18:57.427 }, 00:18:57.427 "memory_domains": [ 00:18:57.427 { 00:18:57.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.427 "dma_device_type": 2 00:18:57.427 } 00:18:57.427 ], 00:18:57.427 "driver_specific": {} 00:18:57.427 } 00:18:57.427 ] 00:18:57.684 16:36:28 -- common/autotest_common.sh@895 -- # return 0 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@125 -- # local tmp 
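The stretch of trace above (bdev_malloc_create -> bdev_raid_create -s -> waitforbdev -> verify_raid_bdev_state) is the whole state machine this test exercises. As a reading aid, here is a minimal standalone sketch of that RPC sequence; the rpc.py path, socket, command names, bdev names and the 32 MiB / 512-byte sizing are copied verbatim from the trace, while the loop framing and the trailing ".state" jq filter are illustrative additions and not part of the test script itself:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # 32 MiB malloc bdev with a 512-byte block size (65536 blocks, matching the dumps above)
  $RPC bdev_malloc_create 32 512 -b BaseBdev1

  # raid1 with an on-disk superblock (-s); with three members still missing,
  # the array is created in the "configuring" state
  $RPC bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # creating each remaining member lets the raid claim it immediately
  # ("bdev BaseBdevN is claimed" in the trace)
  for b in BaseBdev2 BaseBdev3 BaseBdev4; do
      $RPC bdev_malloc_create 32 512 -b "$b"
  done

  # the same query verify_raid_bdev_state drives; prints "online" once all
  # four members are configured
  $RPC bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'

  # raid1 is redundant, so deleting one member degrades the array but leaves
  # it online (num_base_bdevs_operational drops from 4 to 3 in the dumps)
  $RPC bdev_malloc_delete BaseBdev1
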
00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.684 16:36:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.942 16:36:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:57.942 "name": "Existed_Raid", 00:18:57.942 "uuid": "b2c70cb4-3df3-456d-ba14-c2b24cda9594", 00:18:57.942 "strip_size_kb": 0, 00:18:57.942 "state": "configuring", 00:18:57.942 "raid_level": "raid1", 00:18:57.942 "superblock": true, 00:18:57.942 "num_base_bdevs": 4, 00:18:57.942 "num_base_bdevs_discovered": 1, 00:18:57.942 "num_base_bdevs_operational": 4, 00:18:57.942 "base_bdevs_list": [ 00:18:57.942 { 00:18:57.942 "name": "BaseBdev1", 00:18:57.942 "uuid": "f68cc4d4-fec6-46fd-a5a0-15a5aee9c05c", 00:18:57.942 "is_configured": true, 00:18:57.942 "data_offset": 2048, 00:18:57.942 "data_size": 63488 00:18:57.942 }, 00:18:57.942 { 00:18:57.942 "name": "BaseBdev2", 00:18:57.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.942 "is_configured": false, 00:18:57.942 "data_offset": 0, 00:18:57.942 "data_size": 0 00:18:57.942 }, 00:18:57.942 { 00:18:57.942 "name": "BaseBdev3", 00:18:57.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.942 "is_configured": false, 00:18:57.942 "data_offset": 0, 00:18:57.942 "data_size": 0 00:18:57.942 }, 00:18:57.942 { 00:18:57.942 "name": "BaseBdev4", 00:18:57.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.942 "is_configured": false, 00:18:57.942 "data_offset": 0, 00:18:57.942 "data_size": 0 00:18:57.942 } 00:18:57.942 ] 00:18:57.942 }' 00:18:57.942 16:36:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:57.942 16:36:29 -- common/autotest_common.sh@10 -- # set +x 00:18:58.507 16:36:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:58.507 [2024-07-13 16:36:29.929871] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.507 [2024-07-13 16:36:29.930268] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:58.507 16:36:29 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:58.507 16:36:29 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:58.766 16:36:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:59.024 BaseBdev1 00:18:59.024 16:36:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:59.024 16:36:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:59.024 16:36:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:59.024 16:36:30 -- common/autotest_common.sh@889 -- # local i 00:18:59.024 16:36:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:59.024 16:36:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:59.024 16:36:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:59.281 16:36:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:59.539 [ 00:18:59.539 { 00:18:59.539 "name": "BaseBdev1", 00:18:59.539 "aliases": [ 00:18:59.539 "5967d059-3665-4df9-bb15-4388ea156b49" 00:18:59.539 ], 00:18:59.539 
"product_name": "Malloc disk", 00:18:59.539 "block_size": 512, 00:18:59.539 "num_blocks": 65536, 00:18:59.539 "uuid": "5967d059-3665-4df9-bb15-4388ea156b49", 00:18:59.539 "assigned_rate_limits": { 00:18:59.539 "rw_ios_per_sec": 0, 00:18:59.539 "rw_mbytes_per_sec": 0, 00:18:59.539 "r_mbytes_per_sec": 0, 00:18:59.539 "w_mbytes_per_sec": 0 00:18:59.539 }, 00:18:59.539 "claimed": false, 00:18:59.539 "zoned": false, 00:18:59.539 "supported_io_types": { 00:18:59.539 "read": true, 00:18:59.539 "write": true, 00:18:59.539 "unmap": true, 00:18:59.539 "write_zeroes": true, 00:18:59.539 "flush": true, 00:18:59.539 "reset": true, 00:18:59.539 "compare": false, 00:18:59.539 "compare_and_write": false, 00:18:59.539 "abort": true, 00:18:59.539 "nvme_admin": false, 00:18:59.539 "nvme_io": false 00:18:59.539 }, 00:18:59.539 "memory_domains": [ 00:18:59.539 { 00:18:59.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.539 "dma_device_type": 2 00:18:59.539 } 00:18:59.539 ], 00:18:59.539 "driver_specific": {} 00:18:59.539 } 00:18:59.539 ] 00:18:59.539 16:36:30 -- common/autotest_common.sh@895 -- # return 0 00:18:59.539 16:36:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:59.796 [2024-07-13 16:36:31.068378] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.796 [2024-07-13 16:36:31.071226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.796 [2024-07-13 16:36:31.071471] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.796 [2024-07-13 16:36:31.071568] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.796 [2024-07-13 16:36:31.071634] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.796 [2024-07-13 16:36:31.071664] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:59.796 [2024-07-13 16:36:31.071761] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.796 16:36:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.054 16:36:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.054 "name": "Existed_Raid", 00:19:00.054 "uuid": 
"8666ff7f-bf73-4b53-ad7a-a153363f96fb", 00:19:00.054 "strip_size_kb": 0, 00:19:00.054 "state": "configuring", 00:19:00.054 "raid_level": "raid1", 00:19:00.054 "superblock": true, 00:19:00.054 "num_base_bdevs": 4, 00:19:00.054 "num_base_bdevs_discovered": 1, 00:19:00.054 "num_base_bdevs_operational": 4, 00:19:00.054 "base_bdevs_list": [ 00:19:00.054 { 00:19:00.054 "name": "BaseBdev1", 00:19:00.054 "uuid": "5967d059-3665-4df9-bb15-4388ea156b49", 00:19:00.054 "is_configured": true, 00:19:00.054 "data_offset": 2048, 00:19:00.054 "data_size": 63488 00:19:00.054 }, 00:19:00.054 { 00:19:00.054 "name": "BaseBdev2", 00:19:00.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.054 "is_configured": false, 00:19:00.054 "data_offset": 0, 00:19:00.054 "data_size": 0 00:19:00.054 }, 00:19:00.054 { 00:19:00.054 "name": "BaseBdev3", 00:19:00.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.054 "is_configured": false, 00:19:00.054 "data_offset": 0, 00:19:00.054 "data_size": 0 00:19:00.054 }, 00:19:00.054 { 00:19:00.054 "name": "BaseBdev4", 00:19:00.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.054 "is_configured": false, 00:19:00.054 "data_offset": 0, 00:19:00.054 "data_size": 0 00:19:00.054 } 00:19:00.054 ] 00:19:00.054 }' 00:19:00.054 16:36:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.054 16:36:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.619 16:36:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:00.877 [2024-07-13 16:36:32.170858] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.877 BaseBdev2 00:19:00.877 16:36:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:00.877 16:36:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:00.877 16:36:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:00.877 16:36:32 -- common/autotest_common.sh@889 -- # local i 00:19:00.877 16:36:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:00.877 16:36:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:00.877 16:36:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:01.134 16:36:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:01.392 [ 00:19:01.392 { 00:19:01.392 "name": "BaseBdev2", 00:19:01.392 "aliases": [ 00:19:01.392 "8d711c0e-7670-4afe-af45-cf87eb2b3146" 00:19:01.392 ], 00:19:01.392 "product_name": "Malloc disk", 00:19:01.392 "block_size": 512, 00:19:01.392 "num_blocks": 65536, 00:19:01.392 "uuid": "8d711c0e-7670-4afe-af45-cf87eb2b3146", 00:19:01.392 "assigned_rate_limits": { 00:19:01.392 "rw_ios_per_sec": 0, 00:19:01.392 "rw_mbytes_per_sec": 0, 00:19:01.392 "r_mbytes_per_sec": 0, 00:19:01.392 "w_mbytes_per_sec": 0 00:19:01.392 }, 00:19:01.392 "claimed": true, 00:19:01.392 "claim_type": "exclusive_write", 00:19:01.392 "zoned": false, 00:19:01.392 "supported_io_types": { 00:19:01.392 "read": true, 00:19:01.392 "write": true, 00:19:01.392 "unmap": true, 00:19:01.392 "write_zeroes": true, 00:19:01.392 "flush": true, 00:19:01.392 "reset": true, 00:19:01.392 "compare": false, 00:19:01.392 "compare_and_write": false, 00:19:01.392 "abort": true, 00:19:01.392 "nvme_admin": false, 00:19:01.392 "nvme_io": false 00:19:01.392 }, 00:19:01.392 "memory_domains": [ 00:19:01.392 { 
00:19:01.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.392 "dma_device_type": 2 00:19:01.392 } 00:19:01.392 ], 00:19:01.392 "driver_specific": {} 00:19:01.392 } 00:19:01.392 ] 00:19:01.392 16:36:32 -- common/autotest_common.sh@895 -- # return 0 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.392 16:36:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.650 16:36:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.650 "name": "Existed_Raid", 00:19:01.650 "uuid": "8666ff7f-bf73-4b53-ad7a-a153363f96fb", 00:19:01.650 "strip_size_kb": 0, 00:19:01.650 "state": "configuring", 00:19:01.650 "raid_level": "raid1", 00:19:01.650 "superblock": true, 00:19:01.650 "num_base_bdevs": 4, 00:19:01.650 "num_base_bdevs_discovered": 2, 00:19:01.650 "num_base_bdevs_operational": 4, 00:19:01.650 "base_bdevs_list": [ 00:19:01.650 { 00:19:01.650 "name": "BaseBdev1", 00:19:01.650 "uuid": "5967d059-3665-4df9-bb15-4388ea156b49", 00:19:01.650 "is_configured": true, 00:19:01.650 "data_offset": 2048, 00:19:01.650 "data_size": 63488 00:19:01.650 }, 00:19:01.650 { 00:19:01.650 "name": "BaseBdev2", 00:19:01.650 "uuid": "8d711c0e-7670-4afe-af45-cf87eb2b3146", 00:19:01.650 "is_configured": true, 00:19:01.650 "data_offset": 2048, 00:19:01.650 "data_size": 63488 00:19:01.650 }, 00:19:01.650 { 00:19:01.650 "name": "BaseBdev3", 00:19:01.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.650 "is_configured": false, 00:19:01.650 "data_offset": 0, 00:19:01.650 "data_size": 0 00:19:01.650 }, 00:19:01.650 { 00:19:01.650 "name": "BaseBdev4", 00:19:01.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.650 "is_configured": false, 00:19:01.650 "data_offset": 0, 00:19:01.650 "data_size": 0 00:19:01.650 } 00:19:01.650 ] 00:19:01.650 }' 00:19:01.650 16:36:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.650 16:36:32 -- common/autotest_common.sh@10 -- # set +x 00:19:02.215 16:36:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:02.474 [2024-07-13 16:36:33.769379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:02.474 BaseBdev3 00:19:02.474 16:36:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:02.474 16:36:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:02.474 16:36:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:02.474 16:36:33 -- 
common/autotest_common.sh@889 -- # local i 00:19:02.474 16:36:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:02.474 16:36:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:02.474 16:36:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.731 16:36:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:02.731 [ 00:19:02.731 { 00:19:02.731 "name": "BaseBdev3", 00:19:02.731 "aliases": [ 00:19:02.731 "e993472c-af4c-4951-9109-f3d224005130" 00:19:02.731 ], 00:19:02.731 "product_name": "Malloc disk", 00:19:02.731 "block_size": 512, 00:19:02.731 "num_blocks": 65536, 00:19:02.731 "uuid": "e993472c-af4c-4951-9109-f3d224005130", 00:19:02.731 "assigned_rate_limits": { 00:19:02.731 "rw_ios_per_sec": 0, 00:19:02.731 "rw_mbytes_per_sec": 0, 00:19:02.731 "r_mbytes_per_sec": 0, 00:19:02.731 "w_mbytes_per_sec": 0 00:19:02.731 }, 00:19:02.731 "claimed": true, 00:19:02.731 "claim_type": "exclusive_write", 00:19:02.731 "zoned": false, 00:19:02.731 "supported_io_types": { 00:19:02.731 "read": true, 00:19:02.731 "write": true, 00:19:02.731 "unmap": true, 00:19:02.731 "write_zeroes": true, 00:19:02.731 "flush": true, 00:19:02.731 "reset": true, 00:19:02.731 "compare": false, 00:19:02.731 "compare_and_write": false, 00:19:02.731 "abort": true, 00:19:02.731 "nvme_admin": false, 00:19:02.731 "nvme_io": false 00:19:02.731 }, 00:19:02.731 "memory_domains": [ 00:19:02.731 { 00:19:02.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.732 "dma_device_type": 2 00:19:02.732 } 00:19:02.732 ], 00:19:02.732 "driver_specific": {} 00:19:02.732 } 00:19:02.732 ] 00:19:02.732 16:36:34 -- common/autotest_common.sh@895 -- # return 0 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.732 16:36:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.299 16:36:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.299 "name": "Existed_Raid", 00:19:03.299 "uuid": "8666ff7f-bf73-4b53-ad7a-a153363f96fb", 00:19:03.299 "strip_size_kb": 0, 00:19:03.299 "state": "configuring", 00:19:03.299 "raid_level": "raid1", 00:19:03.299 "superblock": true, 00:19:03.299 "num_base_bdevs": 4, 00:19:03.299 "num_base_bdevs_discovered": 3, 00:19:03.299 "num_base_bdevs_operational": 4, 00:19:03.299 "base_bdevs_list": [ 00:19:03.299 { 00:19:03.299 "name": "BaseBdev1", 00:19:03.299 
"uuid": "5967d059-3665-4df9-bb15-4388ea156b49", 00:19:03.299 "is_configured": true, 00:19:03.299 "data_offset": 2048, 00:19:03.299 "data_size": 63488 00:19:03.299 }, 00:19:03.299 { 00:19:03.299 "name": "BaseBdev2", 00:19:03.299 "uuid": "8d711c0e-7670-4afe-af45-cf87eb2b3146", 00:19:03.299 "is_configured": true, 00:19:03.299 "data_offset": 2048, 00:19:03.299 "data_size": 63488 00:19:03.299 }, 00:19:03.299 { 00:19:03.299 "name": "BaseBdev3", 00:19:03.299 "uuid": "e993472c-af4c-4951-9109-f3d224005130", 00:19:03.299 "is_configured": true, 00:19:03.299 "data_offset": 2048, 00:19:03.299 "data_size": 63488 00:19:03.299 }, 00:19:03.299 { 00:19:03.299 "name": "BaseBdev4", 00:19:03.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.299 "is_configured": false, 00:19:03.299 "data_offset": 0, 00:19:03.299 "data_size": 0 00:19:03.299 } 00:19:03.299 ] 00:19:03.299 }' 00:19:03.299 16:36:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.299 16:36:34 -- common/autotest_common.sh@10 -- # set +x 00:19:03.558 16:36:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:03.817 [2024-07-13 16:36:35.235977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:03.817 [2024-07-13 16:36:35.236687] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:19:03.817 [2024-07-13 16:36:35.236817] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:03.817 [2024-07-13 16:36:35.237070] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:19:03.817 [2024-07-13 16:36:35.237667] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:19:03.817 [2024-07-13 16:36:35.237786] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:19:03.817 [2024-07-13 16:36:35.238066] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.817 BaseBdev4 00:19:03.817 16:36:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:03.817 16:36:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:03.817 16:36:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:03.817 16:36:35 -- common/autotest_common.sh@889 -- # local i 00:19:03.817 16:36:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:03.817 16:36:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:03.817 16:36:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:04.081 16:36:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:04.377 [ 00:19:04.377 { 00:19:04.377 "name": "BaseBdev4", 00:19:04.377 "aliases": [ 00:19:04.377 "62c48331-4d92-46a5-9e66-6558ee55e485" 00:19:04.377 ], 00:19:04.377 "product_name": "Malloc disk", 00:19:04.377 "block_size": 512, 00:19:04.377 "num_blocks": 65536, 00:19:04.377 "uuid": "62c48331-4d92-46a5-9e66-6558ee55e485", 00:19:04.377 "assigned_rate_limits": { 00:19:04.377 "rw_ios_per_sec": 0, 00:19:04.377 "rw_mbytes_per_sec": 0, 00:19:04.377 "r_mbytes_per_sec": 0, 00:19:04.377 "w_mbytes_per_sec": 0 00:19:04.377 }, 00:19:04.377 "claimed": true, 00:19:04.377 "claim_type": "exclusive_write", 00:19:04.377 "zoned": false, 00:19:04.377 "supported_io_types": { 00:19:04.377 
"read": true, 00:19:04.377 "write": true, 00:19:04.377 "unmap": true, 00:19:04.377 "write_zeroes": true, 00:19:04.377 "flush": true, 00:19:04.377 "reset": true, 00:19:04.377 "compare": false, 00:19:04.377 "compare_and_write": false, 00:19:04.377 "abort": true, 00:19:04.377 "nvme_admin": false, 00:19:04.377 "nvme_io": false 00:19:04.377 }, 00:19:04.377 "memory_domains": [ 00:19:04.377 { 00:19:04.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.377 "dma_device_type": 2 00:19:04.377 } 00:19:04.377 ], 00:19:04.377 "driver_specific": {} 00:19:04.377 } 00:19:04.377 ] 00:19:04.377 16:36:35 -- common/autotest_common.sh@895 -- # return 0 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.377 16:36:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.635 16:36:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.635 "name": "Existed_Raid", 00:19:04.635 "uuid": "8666ff7f-bf73-4b53-ad7a-a153363f96fb", 00:19:04.635 "strip_size_kb": 0, 00:19:04.635 "state": "online", 00:19:04.635 "raid_level": "raid1", 00:19:04.635 "superblock": true, 00:19:04.635 "num_base_bdevs": 4, 00:19:04.635 "num_base_bdevs_discovered": 4, 00:19:04.635 "num_base_bdevs_operational": 4, 00:19:04.635 "base_bdevs_list": [ 00:19:04.635 { 00:19:04.635 "name": "BaseBdev1", 00:19:04.635 "uuid": "5967d059-3665-4df9-bb15-4388ea156b49", 00:19:04.635 "is_configured": true, 00:19:04.635 "data_offset": 2048, 00:19:04.635 "data_size": 63488 00:19:04.635 }, 00:19:04.635 { 00:19:04.635 "name": "BaseBdev2", 00:19:04.635 "uuid": "8d711c0e-7670-4afe-af45-cf87eb2b3146", 00:19:04.635 "is_configured": true, 00:19:04.635 "data_offset": 2048, 00:19:04.635 "data_size": 63488 00:19:04.635 }, 00:19:04.635 { 00:19:04.635 "name": "BaseBdev3", 00:19:04.635 "uuid": "e993472c-af4c-4951-9109-f3d224005130", 00:19:04.635 "is_configured": true, 00:19:04.635 "data_offset": 2048, 00:19:04.635 "data_size": 63488 00:19:04.635 }, 00:19:04.635 { 00:19:04.635 "name": "BaseBdev4", 00:19:04.635 "uuid": "62c48331-4d92-46a5-9e66-6558ee55e485", 00:19:04.635 "is_configured": true, 00:19:04.635 "data_offset": 2048, 00:19:04.635 "data_size": 63488 00:19:04.635 } 00:19:04.635 ] 00:19:04.635 }' 00:19:04.635 16:36:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.635 16:36:35 -- common/autotest_common.sh@10 -- # set +x 00:19:05.203 16:36:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:05.461 [2024-07-13 16:36:36.720484] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:05.461 16:36:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:05.461 16:36:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:05.461 16:36:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:05.461 16:36:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:05.461 16:36:36 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:05.461 16:36:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:05.461 16:36:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:05.461 16:36:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:05.461 16:36:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:05.462 16:36:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:05.462 16:36:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:05.462 16:36:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.462 16:36:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.462 16:36:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.462 16:36:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.462 16:36:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.462 16:36:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.719 16:36:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.719 "name": "Existed_Raid", 00:19:05.719 "uuid": "8666ff7f-bf73-4b53-ad7a-a153363f96fb", 00:19:05.719 "strip_size_kb": 0, 00:19:05.719 "state": "online", 00:19:05.719 "raid_level": "raid1", 00:19:05.719 "superblock": true, 00:19:05.719 "num_base_bdevs": 4, 00:19:05.719 "num_base_bdevs_discovered": 3, 00:19:05.719 "num_base_bdevs_operational": 3, 00:19:05.719 "base_bdevs_list": [ 00:19:05.719 { 00:19:05.719 "name": null, 00:19:05.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.719 "is_configured": false, 00:19:05.719 "data_offset": 2048, 00:19:05.719 "data_size": 63488 00:19:05.719 }, 00:19:05.719 { 00:19:05.719 "name": "BaseBdev2", 00:19:05.719 "uuid": "8d711c0e-7670-4afe-af45-cf87eb2b3146", 00:19:05.719 "is_configured": true, 00:19:05.719 "data_offset": 2048, 00:19:05.720 "data_size": 63488 00:19:05.720 }, 00:19:05.720 { 00:19:05.720 "name": "BaseBdev3", 00:19:05.720 "uuid": "e993472c-af4c-4951-9109-f3d224005130", 00:19:05.720 "is_configured": true, 00:19:05.720 "data_offset": 2048, 00:19:05.720 "data_size": 63488 00:19:05.720 }, 00:19:05.720 { 00:19:05.720 "name": "BaseBdev4", 00:19:05.720 "uuid": "62c48331-4d92-46a5-9e66-6558ee55e485", 00:19:05.720 "is_configured": true, 00:19:05.720 "data_offset": 2048, 00:19:05.720 "data_size": 63488 00:19:05.720 } 00:19:05.720 ] 00:19:05.720 }' 00:19:05.720 16:36:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.720 16:36:36 -- common/autotest_common.sh@10 -- # set +x 00:19:06.287 16:36:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:06.287 16:36:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:06.287 16:36:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.287 16:36:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:06.545 16:36:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:06.545 16:36:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:06.545 16:36:37 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:06.802 [2024-07-13 16:36:38.022303] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:06.802 16:36:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:06.802 16:36:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:06.802 16:36:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.802 16:36:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:07.060 16:36:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:07.060 16:36:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.060 16:36:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:07.060 [2024-07-13 16:36:38.477210] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:07.060 16:36:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:07.060 16:36:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:07.060 16:36:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.060 16:36:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:07.319 16:36:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:07.319 16:36:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.319 16:36:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:07.577 [2024-07-13 16:36:38.952813] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:07.577 [2024-07-13 16:36:38.953157] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.577 [2024-07-13 16:36:38.953384] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.577 [2024-07-13 16:36:38.975190] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.577 [2024-07-13 16:36:38.975503] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:19:07.577 16:36:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:07.577 16:36:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:07.577 16:36:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.577 16:36:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:07.836 16:36:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:07.836 16:36:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:07.836 16:36:39 -- bdev/bdev_raid.sh@287 -- # killprocess 131847 00:19:07.836 16:36:39 -- common/autotest_common.sh@926 -- # '[' -z 131847 ']' 00:19:07.836 16:36:39 -- common/autotest_common.sh@930 -- # kill -0 131847 00:19:07.836 16:36:39 -- common/autotest_common.sh@931 -- # uname 00:19:07.836 16:36:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:07.836 16:36:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131847 00:19:07.836 16:36:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:07.836 16:36:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:07.836 16:36:39 -- common/autotest_common.sh@944 -- # echo 'killing process 
with pid 131847' 00:19:07.836 killing process with pid 131847 00:19:07.836 16:36:39 -- common/autotest_common.sh@945 -- # kill 131847 00:19:07.836 16:36:39 -- common/autotest_common.sh@950 -- # wait 131847 00:19:07.836 [2024-07-13 16:36:39.256817] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.836 [2024-07-13 16:36:39.256927] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:08.403 00:19:08.403 real 0m13.926s 00:19:08.403 user 0m24.561s 00:19:08.403 sys 0m2.492s 00:19:08.403 16:36:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.403 16:36:39 -- common/autotest_common.sh@10 -- # set +x 00:19:08.403 ************************************ 00:19:08.403 END TEST raid_state_function_test_sb 00:19:08.403 ************************************ 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:08.403 16:36:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:08.403 16:36:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:08.403 16:36:39 -- common/autotest_common.sh@10 -- # set +x 00:19:08.403 ************************************ 00:19:08.403 START TEST raid_superblock_test 00:19:08.403 ************************************ 00:19:08.403 16:36:39 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@357 -- # raid_pid=132289 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:08.403 16:36:39 -- bdev/bdev_raid.sh@358 -- # waitforlisten 132289 /var/tmp/spdk-raid.sock 00:19:08.403 16:36:39 -- common/autotest_common.sh@819 -- # '[' -z 132289 ']' 00:19:08.403 16:36:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:08.403 16:36:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:08.403 16:36:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:08.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
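Note on the harness: raid_superblock_test drives a dedicated bdev_svc application over a private JSON-RPC socket rather than reusing a shared target. A minimal sketch of the startup handshake traced above, assuming rpc_get_methods as the readiness probe (the real waitforlisten helper in autotest_common.sh adds retry limits and error handling):

    # Launch bdev_svc with raid debug logging on a private RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Poll until the app accepts RPCs on the UNIX domain socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done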
00:19:08.403 16:36:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:08.403 16:36:39 -- common/autotest_common.sh@10 -- # set +x 00:19:08.403 [2024-07-13 16:36:39.797743] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:08.403 [2024-07-13 16:36:39.798251] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132289 ] 00:19:08.661 [2024-07-13 16:36:39.944877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.661 [2024-07-13 16:36:40.036256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.661 [2024-07-13 16:36:40.117876] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.595 16:36:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:09.595 16:36:40 -- common/autotest_common.sh@852 -- # return 0 00:19:09.595 16:36:40 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:09.595 16:36:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:09.595 16:36:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:09.595 16:36:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:09.595 16:36:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:09.595 16:36:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.595 16:36:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.595 16:36:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.595 16:36:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:09.595 malloc1 00:19:09.595 16:36:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:09.853 [2024-07-13 16:36:41.234620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:09.853 [2024-07-13 16:36:41.235096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.853 [2024-07-13 16:36:41.235252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:19:09.853 [2024-07-13 16:36:41.235430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.853 [2024-07-13 16:36:41.238820] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.853 [2024-07-13 16:36:41.239035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:09.853 pt1 00:19:09.853 16:36:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:09.853 16:36:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:09.853 16:36:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:09.853 16:36:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:09.853 16:36:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:09.853 16:36:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.853 16:36:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.853 16:36:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.853 16:36:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:10.110 malloc2 00:19:10.110 16:36:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:10.368 [2024-07-13 16:36:41.740086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:10.368 [2024-07-13 16:36:41.740486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.368 [2024-07-13 16:36:41.740577] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:10.368 [2024-07-13 16:36:41.740741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.368 [2024-07-13 16:36:41.743659] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.368 [2024-07-13 16:36:41.743865] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:10.368 pt2 00:19:10.368 16:36:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:10.368 16:36:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:10.368 16:36:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:10.368 16:36:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:10.368 16:36:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:10.368 16:36:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:10.368 16:36:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:10.368 16:36:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:10.368 16:36:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:10.626 malloc3 00:19:10.626 16:36:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:10.883 [2024-07-13 16:36:42.249116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:10.883 [2024-07-13 16:36:42.249570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.883 [2024-07-13 16:36:42.249664] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:10.884 [2024-07-13 16:36:42.249924] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.884 [2024-07-13 16:36:42.253011] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.884 [2024-07-13 16:36:42.253251] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:10.884 pt3 00:19:10.884 16:36:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:10.884 16:36:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:10.884 16:36:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:10.884 16:36:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:10.884 16:36:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:10.884 16:36:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:10.884 16:36:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:10.884 16:36:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:10.884 16:36:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:11.141 malloc4 00:19:11.141 16:36:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:11.400 [2024-07-13 16:36:42.742008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:11.400 [2024-07-13 16:36:42.742404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.400 [2024-07-13 16:36:42.742488] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:11.400 [2024-07-13 16:36:42.742625] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.400 [2024-07-13 16:36:42.745704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.400 [2024-07-13 16:36:42.745929] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:11.400 pt4 00:19:11.400 16:36:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:11.400 16:36:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:11.400 16:36:42 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:11.657 [2024-07-13 16:36:43.002479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:11.657 [2024-07-13 16:36:43.005439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:11.657 [2024-07-13 16:36:43.005706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:11.657 [2024-07-13 16:36:43.005787] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:11.657 [2024-07-13 16:36:43.006128] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:19:11.657 [2024-07-13 16:36:43.006244] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:11.657 [2024-07-13 16:36:43.006452] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:11.657 [2024-07-13 16:36:43.007098] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:19:11.657 [2024-07-13 16:36:43.007214] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:19:11.657 [2024-07-13 16:36:43.007575] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
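Note: the loop above builds the array under test from scratch: four 32 MiB malloc bdevs, each wrapped in a passthru bdev with a fixed UUID, then assembled into raid1 with an on-disk superblock (-s). The superblock reserves 2048 blocks at the head of every member, which is why each base bdev reports data_offset 2048 and data_size 63488 of its 65536 blocks. A condensed sketch of the same RPC sequence, with socket and script paths as printed in this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MiB backing bdev with 512-byte blocks -> 65536 blocks
        $rpc bdev_malloc_create 32 512 -b malloc$i
        # passthru wrapper with a deterministic UUID for later superblock checks
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    # -s writes a raid superblock to each base bdev at create time
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s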
00:19:11.657 16:36:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.913 16:36:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.913 "name": "raid_bdev1", 00:19:11.913 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:11.913 "strip_size_kb": 0, 00:19:11.913 "state": "online", 00:19:11.913 "raid_level": "raid1", 00:19:11.913 "superblock": true, 00:19:11.913 "num_base_bdevs": 4, 00:19:11.913 "num_base_bdevs_discovered": 4, 00:19:11.913 "num_base_bdevs_operational": 4, 00:19:11.913 "base_bdevs_list": [ 00:19:11.913 { 00:19:11.913 "name": "pt1", 00:19:11.913 "uuid": "5ca5f0fa-2fca-58f4-8c8a-657f86195c28", 00:19:11.913 "is_configured": true, 00:19:11.913 "data_offset": 2048, 00:19:11.913 "data_size": 63488 00:19:11.913 }, 00:19:11.913 { 00:19:11.913 "name": "pt2", 00:19:11.913 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:11.913 "is_configured": true, 00:19:11.913 "data_offset": 2048, 00:19:11.913 "data_size": 63488 00:19:11.913 }, 00:19:11.913 { 00:19:11.913 "name": "pt3", 00:19:11.913 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:11.913 "is_configured": true, 00:19:11.913 "data_offset": 2048, 00:19:11.913 "data_size": 63488 00:19:11.913 }, 00:19:11.913 { 00:19:11.913 "name": "pt4", 00:19:11.913 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:11.913 "is_configured": true, 00:19:11.913 "data_offset": 2048, 00:19:11.913 "data_size": 63488 00:19:11.913 } 00:19:11.913 ] 00:19:11.913 }' 00:19:11.913 16:36:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.914 16:36:43 -- common/autotest_common.sh@10 -- # set +x 00:19:12.479 16:36:43 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:12.479 16:36:43 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:12.737 [2024-07-13 16:36:44.076017] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.737 16:36:44 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d38b407f-7840-4267-a036-cfa1899152e7 00:19:12.737 16:36:44 -- bdev/bdev_raid.sh@380 -- # '[' -z d38b407f-7840-4267-a036-cfa1899152e7 ']' 00:19:12.737 16:36:44 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:12.994 [2024-07-13 16:36:44.339780] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.994 [2024-07-13 16:36:44.340102] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.994 [2024-07-13 16:36:44.340419] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.994 [2024-07-13 16:36:44.340672] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.994 [2024-07-13 16:36:44.340818] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:19:12.994 16:36:44 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.994 16:36:44 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:13.253 16:36:44 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:13.253 16:36:44 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:13.253 16:36:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:13.253 16:36:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
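Note: teardown is verified as thoroughly as construction. The test reads the array's UUID back through jq, deletes the raid bdev (driving the state machine from online to offline before the base bdev claims are released), then removes each passthru bdev in turn. A sketch of the verification steps, using the same jq filters as the trace above ($rpc as defined in the earlier sketch):

    # Capture and sanity-check the array UUID.
    uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [ -n "$uuid" ]
    # Tear the array down; the base bdevs become unclaimed.
    $rpc bdev_raid_delete raid_bdev1
    # No raid bdevs should remain afterwards.
    [ -z "$($rpc bdev_raid_get_bdevs all | jq -r '.[]')" ]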
00:19:13.511 16:36:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:13.511 16:36:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:13.770 16:36:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:13.770 16:36:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:14.029 16:36:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:14.029 16:36:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:14.029 16:36:45 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:14.029 16:36:45 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:14.288 16:36:45 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:14.288 16:36:45 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:14.288 16:36:45 -- common/autotest_common.sh@640 -- # local es=0 00:19:14.288 16:36:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:14.288 16:36:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:14.288 16:36:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:14.288 16:36:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:14.288 16:36:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:14.288 16:36:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:14.288 16:36:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:14.288 16:36:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:14.288 16:36:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:14.288 16:36:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:14.547 [2024-07-13 16:36:45.852016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:14.547 [2024-07-13 16:36:45.855194] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:14.547 [2024-07-13 16:36:45.855492] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:14.547 [2024-07-13 16:36:45.855569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:14.547 [2024-07-13 16:36:45.855727] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:14.547 [2024-07-13 16:36:45.856002] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:14.547 [2024-07-13 16:36:45.856149] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:14.547 [2024-07-13 16:36:45.856247] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:14.547 [2024-07-13 16:36:45.856447] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.547 [2024-07-13 16:36:45.856609] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:19:14.547 request: 00:19:14.547 { 00:19:14.547 "name": "raid_bdev1", 00:19:14.547 "raid_level": "raid1", 00:19:14.547 "base_bdevs": [ 00:19:14.547 "malloc1", 00:19:14.547 "malloc2", 00:19:14.547 "malloc3", 00:19:14.547 "malloc4" 00:19:14.547 ], 00:19:14.547 "superblock": false, 00:19:14.547 "method": "bdev_raid_create", 00:19:14.547 "req_id": 1 00:19:14.547 } 00:19:14.547 Got JSON-RPC error response 00:19:14.547 response: 00:19:14.547 { 00:19:14.547 "code": -17, 00:19:14.547 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:14.547 } 00:19:14.547 16:36:45 -- common/autotest_common.sh@643 -- # es=1 00:19:14.547 16:36:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:14.547 16:36:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:14.547 16:36:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:14.547 16:36:45 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.547 16:36:45 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:14.805 16:36:46 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:14.805 16:36:46 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:14.805 16:36:46 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:15.064 [2024-07-13 16:36:46.361126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:15.064 [2024-07-13 16:36:46.361463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.064 [2024-07-13 16:36:46.361638] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:15.064 [2024-07-13 16:36:46.361767] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.064 [2024-07-13 16:36:46.365343] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.064 [2024-07-13 16:36:46.365629] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:15.064 [2024-07-13 16:36:46.365889] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:15.064 [2024-07-13 16:36:46.366108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:15.064 pt1 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.064 16:36:46 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.064 16:36:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.327 16:36:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.327 "name": "raid_bdev1", 00:19:15.327 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:15.327 "strip_size_kb": 0, 00:19:15.327 "state": "configuring", 00:19:15.327 "raid_level": "raid1", 00:19:15.327 "superblock": true, 00:19:15.327 "num_base_bdevs": 4, 00:19:15.327 "num_base_bdevs_discovered": 1, 00:19:15.327 "num_base_bdevs_operational": 4, 00:19:15.327 "base_bdevs_list": [ 00:19:15.327 { 00:19:15.327 "name": "pt1", 00:19:15.327 "uuid": "5ca5f0fa-2fca-58f4-8c8a-657f86195c28", 00:19:15.327 "is_configured": true, 00:19:15.327 "data_offset": 2048, 00:19:15.327 "data_size": 63488 00:19:15.327 }, 00:19:15.327 { 00:19:15.327 "name": null, 00:19:15.327 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:15.327 "is_configured": false, 00:19:15.327 "data_offset": 2048, 00:19:15.327 "data_size": 63488 00:19:15.327 }, 00:19:15.327 { 00:19:15.327 "name": null, 00:19:15.327 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:15.327 "is_configured": false, 00:19:15.327 "data_offset": 2048, 00:19:15.327 "data_size": 63488 00:19:15.327 }, 00:19:15.327 { 00:19:15.327 "name": null, 00:19:15.327 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:15.327 "is_configured": false, 00:19:15.327 "data_offset": 2048, 00:19:15.327 "data_size": 63488 00:19:15.327 } 00:19:15.327 ] 00:19:15.327 }' 00:19:15.327 16:36:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.327 16:36:46 -- common/autotest_common.sh@10 -- # set +x 00:19:15.898 16:36:47 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:15.898 16:36:47 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:16.156 [2024-07-13 16:36:47.562287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:16.156 [2024-07-13 16:36:47.562722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.156 [2024-07-13 16:36:47.562907] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:16.156 [2024-07-13 16:36:47.563044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.156 [2024-07-13 16:36:47.563686] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.156 [2024-07-13 16:36:47.563880] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:16.156 [2024-07-13 16:36:47.564101] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:16.156 [2024-07-13 16:36:47.564206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:16.156 pt2 00:19:16.156 16:36:47 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:16.414 [2024-07-13 16:36:47.846363] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.415 16:36:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.982 16:36:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.982 "name": "raid_bdev1", 00:19:16.982 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:16.982 "strip_size_kb": 0, 00:19:16.982 "state": "configuring", 00:19:16.982 "raid_level": "raid1", 00:19:16.982 "superblock": true, 00:19:16.982 "num_base_bdevs": 4, 00:19:16.982 "num_base_bdevs_discovered": 1, 00:19:16.982 "num_base_bdevs_operational": 4, 00:19:16.982 "base_bdevs_list": [ 00:19:16.982 { 00:19:16.982 "name": "pt1", 00:19:16.982 "uuid": "5ca5f0fa-2fca-58f4-8c8a-657f86195c28", 00:19:16.982 "is_configured": true, 00:19:16.982 "data_offset": 2048, 00:19:16.982 "data_size": 63488 00:19:16.982 }, 00:19:16.982 { 00:19:16.982 "name": null, 00:19:16.982 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:16.982 "is_configured": false, 00:19:16.982 "data_offset": 2048, 00:19:16.982 "data_size": 63488 00:19:16.982 }, 00:19:16.982 { 00:19:16.982 "name": null, 00:19:16.982 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:16.982 "is_configured": false, 00:19:16.982 "data_offset": 2048, 00:19:16.982 "data_size": 63488 00:19:16.982 }, 00:19:16.982 { 00:19:16.982 "name": null, 00:19:16.982 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:16.982 "is_configured": false, 00:19:16.982 "data_offset": 2048, 00:19:16.982 "data_size": 63488 00:19:16.982 } 00:19:16.982 ] 00:19:16.982 }' 00:19:16.982 16:36:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.982 16:36:48 -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 16:36:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:17.549 16:36:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:17.549 16:36:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:17.549 [2024-07-13 16:36:48.974589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:17.549 [2024-07-13 16:36:48.975015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.549 [2024-07-13 16:36:48.975124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:17.549 [2024-07-13 16:36:48.975234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.549 [2024-07-13 16:36:48.975844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.549 [2024-07-13 16:36:48.976026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:17.549 [2024-07-13 16:36:48.976214] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:17.549 [2024-07-13 
16:36:48.976347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:17.549 pt2 00:19:17.549 16:36:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:17.549 16:36:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:17.549 16:36:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:17.808 [2024-07-13 16:36:49.194669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:17.808 [2024-07-13 16:36:49.195117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.808 [2024-07-13 16:36:49.195199] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:17.808 [2024-07-13 16:36:49.195443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.808 [2024-07-13 16:36:49.196007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.808 [2024-07-13 16:36:49.196193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:17.808 [2024-07-13 16:36:49.196434] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:17.808 [2024-07-13 16:36:49.196563] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:17.808 pt3 00:19:17.808 16:36:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:17.808 16:36:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:17.808 16:36:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:18.066 [2024-07-13 16:36:49.478715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:18.066 [2024-07-13 16:36:49.479134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.066 [2024-07-13 16:36:49.479241] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:18.066 [2024-07-13 16:36:49.479376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.066 [2024-07-13 16:36:49.479932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.066 [2024-07-13 16:36:49.480120] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:18.066 [2024-07-13 16:36:49.480353] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:18.067 [2024-07-13 16:36:49.480477] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:18.067 [2024-07-13 16:36:49.480751] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:18.067 [2024-07-13 16:36:49.480874] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:18.067 [2024-07-13 16:36:49.481031] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:19:18.067 [2024-07-13 16:36:49.481611] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:18.067 [2024-07-13 16:36:49.481734] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:18.067 [2024-07-13 16:36:49.481948] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.067 pt4 
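Note: no bdev_raid_create is issued anywhere in this stretch. Because the array was created with -s, each re-created passthru bdev is recognized during examine ("raid superblock found on bdev ptN") and re-claimed, and with pt1 already claimed the array flips from "configuring" back to "online" on its own once the last member appears. A sketch of the reassembly path, assuming the superblocks written earlier are still present on malloc2..malloc4:

    # Re-expose the members; examine reclaims each from its superblock.
    for i in 2 3 4; do
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    $rpc bdev_wait_for_examine
    $rpc bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "raid_bdev1") | .state'   # -> online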
00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.067 16:36:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.326 16:36:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.326 "name": "raid_bdev1", 00:19:18.326 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:18.326 "strip_size_kb": 0, 00:19:18.326 "state": "online", 00:19:18.326 "raid_level": "raid1", 00:19:18.326 "superblock": true, 00:19:18.326 "num_base_bdevs": 4, 00:19:18.326 "num_base_bdevs_discovered": 4, 00:19:18.326 "num_base_bdevs_operational": 4, 00:19:18.326 "base_bdevs_list": [ 00:19:18.326 { 00:19:18.326 "name": "pt1", 00:19:18.326 "uuid": "5ca5f0fa-2fca-58f4-8c8a-657f86195c28", 00:19:18.326 "is_configured": true, 00:19:18.326 "data_offset": 2048, 00:19:18.326 "data_size": 63488 00:19:18.326 }, 00:19:18.326 { 00:19:18.326 "name": "pt2", 00:19:18.326 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:18.326 "is_configured": true, 00:19:18.326 "data_offset": 2048, 00:19:18.326 "data_size": 63488 00:19:18.326 }, 00:19:18.326 { 00:19:18.326 "name": "pt3", 00:19:18.326 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:18.326 "is_configured": true, 00:19:18.326 "data_offset": 2048, 00:19:18.326 "data_size": 63488 00:19:18.326 }, 00:19:18.326 { 00:19:18.326 "name": "pt4", 00:19:18.326 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:18.326 "is_configured": true, 00:19:18.326 "data_offset": 2048, 00:19:18.326 "data_size": 63488 00:19:18.326 } 00:19:18.326 ] 00:19:18.326 }' 00:19:18.326 16:36:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.326 16:36:49 -- common/autotest_common.sh@10 -- # set +x 00:19:19.261 16:36:50 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:19.261 16:36:50 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:19.261 [2024-07-13 16:36:50.643199] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.261 16:36:50 -- bdev/bdev_raid.sh@430 -- # '[' d38b407f-7840-4267-a036-cfa1899152e7 '!=' d38b407f-7840-4267-a036-cfa1899152e7 ']' 00:19:19.261 16:36:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:19:19.261 16:36:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:19.261 16:36:50 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:19.261 16:36:50 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:19.519 [2024-07-13 16:36:50.855061] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.519 16:36:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.777 16:36:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.777 "name": "raid_bdev1", 00:19:19.777 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:19.777 "strip_size_kb": 0, 00:19:19.777 "state": "online", 00:19:19.777 "raid_level": "raid1", 00:19:19.777 "superblock": true, 00:19:19.777 "num_base_bdevs": 4, 00:19:19.777 "num_base_bdevs_discovered": 3, 00:19:19.777 "num_base_bdevs_operational": 3, 00:19:19.777 "base_bdevs_list": [ 00:19:19.777 { 00:19:19.777 "name": null, 00:19:19.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.777 "is_configured": false, 00:19:19.777 "data_offset": 2048, 00:19:19.777 "data_size": 63488 00:19:19.777 }, 00:19:19.777 { 00:19:19.777 "name": "pt2", 00:19:19.777 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:19.777 "is_configured": true, 00:19:19.777 "data_offset": 2048, 00:19:19.777 "data_size": 63488 00:19:19.777 }, 00:19:19.777 { 00:19:19.777 "name": "pt3", 00:19:19.777 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:19.777 "is_configured": true, 00:19:19.777 "data_offset": 2048, 00:19:19.777 "data_size": 63488 00:19:19.777 }, 00:19:19.777 { 00:19:19.777 "name": "pt4", 00:19:19.777 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:19.777 "is_configured": true, 00:19:19.777 "data_offset": 2048, 00:19:19.777 "data_size": 63488 00:19:19.777 } 00:19:19.777 ] 00:19:19.777 }' 00:19:19.777 16:36:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.777 16:36:51 -- common/autotest_common.sh@10 -- # set +x 00:19:20.342 16:36:51 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:20.600 [2024-07-13 16:36:51.979256] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.600 [2024-07-13 16:36:51.979607] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.600 [2024-07-13 16:36:51.979821] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.600 [2024-07-13 16:36:51.980003] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.600 [2024-07-13 16:36:51.980091] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:20.600 16:36:52 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:20.600 16:36:52 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:19:20.857 16:36:52 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:19:20.857 16:36:52 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:19:20.857 16:36:52 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:19:20.857 16:36:52 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:20.857 16:36:52 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:21.115 16:36:52 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:21.115 16:36:52 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:21.115 16:36:52 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:21.372 16:36:52 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:21.372 16:36:52 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:21.372 16:36:52 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:21.629 16:36:53 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:21.629 16:36:53 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:21.629 16:36:53 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:19:21.629 16:36:53 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:21.629 16:36:53 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:21.886 [2024-07-13 16:36:53.223456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.886 [2024-07-13 16:36:53.223914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.886 [2024-07-13 16:36:53.223999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:21.886 [2024-07-13 16:36:53.224104] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.886 [2024-07-13 16:36:53.227206] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.886 [2024-07-13 16:36:53.227494] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:21.886 [2024-07-13 16:36:53.227722] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:21.886 [2024-07-13 16:36:53.227899] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.886 pt2 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.886 16:36:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.144 16:36:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.144 "name": "raid_bdev1", 00:19:22.144 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:22.144 "strip_size_kb": 0, 00:19:22.144 "state": "configuring", 00:19:22.144 "raid_level": "raid1", 00:19:22.144 "superblock": true, 00:19:22.144 "num_base_bdevs": 4, 00:19:22.144 "num_base_bdevs_discovered": 1, 00:19:22.144 "num_base_bdevs_operational": 3, 00:19:22.144 "base_bdevs_list": [ 00:19:22.144 { 00:19:22.144 "name": null, 00:19:22.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.144 "is_configured": false, 00:19:22.144 "data_offset": 2048, 00:19:22.144 "data_size": 63488 00:19:22.144 }, 00:19:22.144 { 00:19:22.144 "name": "pt2", 00:19:22.144 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:22.144 "is_configured": true, 00:19:22.144 "data_offset": 2048, 00:19:22.144 "data_size": 63488 00:19:22.144 }, 00:19:22.144 { 00:19:22.144 "name": null, 00:19:22.144 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:22.144 "is_configured": false, 00:19:22.144 "data_offset": 2048, 00:19:22.144 "data_size": 63488 00:19:22.144 }, 00:19:22.144 { 00:19:22.144 "name": null, 00:19:22.144 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:22.144 "is_configured": false, 00:19:22.144 "data_offset": 2048, 00:19:22.144 "data_size": 63488 00:19:22.144 } 00:19:22.144 ] 00:19:22.144 }' 00:19:22.144 16:36:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.144 16:36:53 -- common/autotest_common.sh@10 -- # set +x 00:19:22.707 16:36:54 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:22.707 16:36:54 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:22.708 16:36:54 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:22.966 [2024-07-13 16:36:54.300135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:22.966 [2024-07-13 16:36:54.300619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.966 [2024-07-13 16:36:54.300761] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:22.966 [2024-07-13 16:36:54.301028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.966 [2024-07-13 16:36:54.301622] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.966 [2024-07-13 16:36:54.301821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:22.966 [2024-07-13 16:36:54.302033] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:22.966 [2024-07-13 16:36:54.302137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:22.966 pt3 00:19:22.966 16:36:54 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:22.966 16:36:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:22.966 16:36:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:22.966 16:36:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:22.966 16:36:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:22.966 16:36:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:22.966 16:36:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.966 16:36:54 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:19:22.966 16:36:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.967 16:36:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.967 16:36:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.967 16:36:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.225 16:36:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.225 "name": "raid_bdev1", 00:19:23.225 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:23.225 "strip_size_kb": 0, 00:19:23.225 "state": "configuring", 00:19:23.225 "raid_level": "raid1", 00:19:23.225 "superblock": true, 00:19:23.225 "num_base_bdevs": 4, 00:19:23.225 "num_base_bdevs_discovered": 2, 00:19:23.225 "num_base_bdevs_operational": 3, 00:19:23.225 "base_bdevs_list": [ 00:19:23.225 { 00:19:23.225 "name": null, 00:19:23.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.225 "is_configured": false, 00:19:23.225 "data_offset": 2048, 00:19:23.225 "data_size": 63488 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "name": "pt2", 00:19:23.225 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:23.225 "is_configured": true, 00:19:23.225 "data_offset": 2048, 00:19:23.225 "data_size": 63488 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "name": "pt3", 00:19:23.225 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:23.225 "is_configured": true, 00:19:23.225 "data_offset": 2048, 00:19:23.225 "data_size": 63488 00:19:23.225 }, 00:19:23.225 { 00:19:23.225 "name": null, 00:19:23.225 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:23.225 "is_configured": false, 00:19:23.225 "data_offset": 2048, 00:19:23.225 "data_size": 63488 00:19:23.225 } 00:19:23.225 ] 00:19:23.225 }' 00:19:23.225 16:36:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.225 16:36:54 -- common/autotest_common.sh@10 -- # set +x 00:19:23.791 16:36:55 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:24.049 16:36:55 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:24.049 16:36:55 -- bdev/bdev_raid.sh@462 -- # i=3 00:19:24.049 16:36:55 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:24.306 [2024-07-13 16:36:55.524410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:24.306 [2024-07-13 16:36:55.524903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.306 [2024-07-13 16:36:55.525006] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:24.306 [2024-07-13 16:36:55.525133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.307 [2024-07-13 16:36:55.525734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.307 [2024-07-13 16:36:55.525901] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:24.307 [2024-07-13 16:36:55.526121] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:24.307 [2024-07-13 16:36:55.526227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:24.307 [2024-07-13 16:36:55.526416] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:19:24.307 [2024-07-13 16:36:55.526509] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
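Note: this second reassembly deliberately leaves pt1 out. raid1 carries redundancy, so once pt2..pt4 are claimed the configure sequence completing just below still brings the array to "online", running degraded with 3 of 4 members (num_base_bdevs_discovered 3, pt1's slot reported as a null entry). A sketch of the degraded-state check, using jq string interpolation over the same RPC output:

    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") |
        "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # -> online 3/4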
00:19:24.307 [2024-07-13 16:36:55.526647] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:19:24.307 [2024-07-13 16:36:55.527197] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:19:24.307 [2024-07-13 16:36:55.527316] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:19:24.307 [2024-07-13 16:36:55.527526] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.307 pt4 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.307 16:36:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.566 16:36:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.566 "name": "raid_bdev1", 00:19:24.566 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:24.566 "strip_size_kb": 0, 00:19:24.566 "state": "online", 00:19:24.566 "raid_level": "raid1", 00:19:24.566 "superblock": true, 00:19:24.566 "num_base_bdevs": 4, 00:19:24.566 "num_base_bdevs_discovered": 3, 00:19:24.566 "num_base_bdevs_operational": 3, 00:19:24.566 "base_bdevs_list": [ 00:19:24.566 { 00:19:24.566 "name": null, 00:19:24.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.566 "is_configured": false, 00:19:24.566 "data_offset": 2048, 00:19:24.566 "data_size": 63488 00:19:24.566 }, 00:19:24.566 { 00:19:24.566 "name": "pt2", 00:19:24.566 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:24.566 "is_configured": true, 00:19:24.566 "data_offset": 2048, 00:19:24.566 "data_size": 63488 00:19:24.566 }, 00:19:24.566 { 00:19:24.566 "name": "pt3", 00:19:24.566 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:24.566 "is_configured": true, 00:19:24.566 "data_offset": 2048, 00:19:24.566 "data_size": 63488 00:19:24.566 }, 00:19:24.566 { 00:19:24.566 "name": "pt4", 00:19:24.566 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:24.566 "is_configured": true, 00:19:24.566 "data_offset": 2048, 00:19:24.566 "data_size": 63488 00:19:24.566 } 00:19:24.566 ] 00:19:24.566 }' 00:19:24.566 16:36:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.566 16:36:55 -- common/autotest_common.sh@10 -- # set +x 00:19:25.133 16:36:56 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:19:25.133 16:36:56 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:25.133 [2024-07-13 16:36:56.524772] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.133 [2024-07-13 16:36:56.525119] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:19:25.133 [2024-07-13 16:36:56.525353] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.133 [2024-07-13 16:36:56.525551] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.133 [2024-07-13 16:36:56.525659] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:19:25.133 16:36:56 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.133 16:36:56 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:19:25.392 16:36:56 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:19:25.392 16:36:56 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:19:25.392 16:36:56 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:25.649 [2024-07-13 16:36:57.008835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:25.649 [2024-07-13 16:36:57.009256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.649 [2024-07-13 16:36:57.009357] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:25.649 [2024-07-13 16:36:57.009464] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.649 [2024-07-13 16:36:57.012549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.649 [2024-07-13 16:36:57.012826] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:25.649 [2024-07-13 16:36:57.013085] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:25.649 [2024-07-13 16:36:57.013226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:25.649 pt1 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.649 16:36:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.907 16:36:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:25.907 "name": "raid_bdev1", 00:19:25.907 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:25.907 "strip_size_kb": 0, 00:19:25.907 "state": "configuring", 00:19:25.907 "raid_level": "raid1", 00:19:25.907 "superblock": true, 00:19:25.907 "num_base_bdevs": 4, 00:19:25.907 "num_base_bdevs_discovered": 1, 00:19:25.907 "num_base_bdevs_operational": 4, 00:19:25.907 "base_bdevs_list": [ 00:19:25.907 { 00:19:25.907 "name": "pt1", 00:19:25.907 "uuid": 
"5ca5f0fa-2fca-58f4-8c8a-657f86195c28", 00:19:25.907 "is_configured": true, 00:19:25.907 "data_offset": 2048, 00:19:25.907 "data_size": 63488 00:19:25.907 }, 00:19:25.907 { 00:19:25.907 "name": null, 00:19:25.907 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:25.907 "is_configured": false, 00:19:25.907 "data_offset": 2048, 00:19:25.907 "data_size": 63488 00:19:25.907 }, 00:19:25.907 { 00:19:25.907 "name": null, 00:19:25.907 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:25.907 "is_configured": false, 00:19:25.907 "data_offset": 2048, 00:19:25.907 "data_size": 63488 00:19:25.907 }, 00:19:25.907 { 00:19:25.907 "name": null, 00:19:25.907 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:25.907 "is_configured": false, 00:19:25.907 "data_offset": 2048, 00:19:25.907 "data_size": 63488 00:19:25.907 } 00:19:25.907 ] 00:19:25.907 }' 00:19:25.907 16:36:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:25.907 16:36:57 -- common/autotest_common.sh@10 -- # set +x 00:19:26.473 16:36:57 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:19:26.474 16:36:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:26.474 16:36:57 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:26.733 16:36:58 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:26.733 16:36:58 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:26.733 16:36:58 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:26.990 16:36:58 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:26.990 16:36:58 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:26.990 16:36:58 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:27.248 16:36:58 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:27.248 16:36:58 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:27.248 16:36:58 -- bdev/bdev_raid.sh@489 -- # i=3 00:19:27.248 16:36:58 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:27.506 [2024-07-13 16:36:58.809601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:27.506 [2024-07-13 16:36:58.809731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.506 [2024-07-13 16:36:58.809778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:27.506 [2024-07-13 16:36:58.809813] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.506 [2024-07-13 16:36:58.810334] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.506 [2024-07-13 16:36:58.810389] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:27.506 [2024-07-13 16:36:58.810480] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:27.506 [2024-07-13 16:36:58.810493] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:27.506 [2024-07-13 16:36:58.810502] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.506 [2024-07-13 16:36:58.810525] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 
00:19:27.506 [2024-07-13 16:36:58.810614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:27.506 pt4 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.506 16:36:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.764 16:36:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.764 "name": "raid_bdev1", 00:19:27.764 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:27.764 "strip_size_kb": 0, 00:19:27.764 "state": "configuring", 00:19:27.764 "raid_level": "raid1", 00:19:27.764 "superblock": true, 00:19:27.765 "num_base_bdevs": 4, 00:19:27.765 "num_base_bdevs_discovered": 1, 00:19:27.765 "num_base_bdevs_operational": 3, 00:19:27.765 "base_bdevs_list": [ 00:19:27.765 { 00:19:27.765 "name": null, 00:19:27.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.765 "is_configured": false, 00:19:27.765 "data_offset": 2048, 00:19:27.765 "data_size": 63488 00:19:27.765 }, 00:19:27.765 { 00:19:27.765 "name": null, 00:19:27.765 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:27.765 "is_configured": false, 00:19:27.765 "data_offset": 2048, 00:19:27.765 "data_size": 63488 00:19:27.765 }, 00:19:27.765 { 00:19:27.765 "name": null, 00:19:27.765 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:27.765 "is_configured": false, 00:19:27.765 "data_offset": 2048, 00:19:27.765 "data_size": 63488 00:19:27.765 }, 00:19:27.765 { 00:19:27.765 "name": "pt4", 00:19:27.765 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:27.765 "is_configured": true, 00:19:27.765 "data_offset": 2048, 00:19:27.765 "data_size": 63488 00:19:27.765 } 00:19:27.765 ] 00:19:27.765 }' 00:19:27.765 16:36:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.765 16:36:59 -- common/autotest_common.sh@10 -- # set +x 00:19:28.329 16:36:59 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:19:28.329 16:36:59 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:28.329 16:36:59 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:28.587 [2024-07-13 16:36:59.985902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:28.587 [2024-07-13 16:36:59.986056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.587 [2024-07-13 16:36:59.986101] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:28.587 [2024-07-13 16:36:59.986133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.587 [2024-07-13 
16:36:59.986668] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.587 [2024-07-13 16:36:59.986722] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:28.587 [2024-07-13 16:36:59.986826] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:28.587 [2024-07-13 16:36:59.986853] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.587 pt2 00:19:28.587 16:37:00 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:28.587 16:37:00 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:28.587 16:37:00 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:28.846 [2024-07-13 16:37:00.225939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:28.846 [2024-07-13 16:37:00.226106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.846 [2024-07-13 16:37:00.226155] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:28.846 [2024-07-13 16:37:00.226189] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.846 [2024-07-13 16:37:00.226726] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.846 [2024-07-13 16:37:00.226779] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:28.846 [2024-07-13 16:37:00.226881] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:28.846 [2024-07-13 16:37:00.226907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:28.846 [2024-07-13 16:37:00.227055] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:19:28.846 [2024-07-13 16:37:00.227064] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:28.846 [2024-07-13 16:37:00.227147] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:19:28.846 [2024-07-13 16:37:00.227485] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:19:28.846 [2024-07-13 16:37:00.227497] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:19:28.846 [2024-07-13 16:37:00.227607] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.846 pt3 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.846 16:37:00 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.846 16:37:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.104 16:37:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.104 "name": "raid_bdev1", 00:19:29.104 "uuid": "d38b407f-7840-4267-a036-cfa1899152e7", 00:19:29.104 "strip_size_kb": 0, 00:19:29.104 "state": "online", 00:19:29.104 "raid_level": "raid1", 00:19:29.104 "superblock": true, 00:19:29.104 "num_base_bdevs": 4, 00:19:29.104 "num_base_bdevs_discovered": 3, 00:19:29.104 "num_base_bdevs_operational": 3, 00:19:29.104 "base_bdevs_list": [ 00:19:29.104 { 00:19:29.104 "name": null, 00:19:29.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.104 "is_configured": false, 00:19:29.104 "data_offset": 2048, 00:19:29.104 "data_size": 63488 00:19:29.104 }, 00:19:29.104 { 00:19:29.104 "name": "pt2", 00:19:29.104 "uuid": "e7b7dfe1-69eb-5c1b-a97b-aedac00fa807", 00:19:29.104 "is_configured": true, 00:19:29.104 "data_offset": 2048, 00:19:29.104 "data_size": 63488 00:19:29.104 }, 00:19:29.104 { 00:19:29.104 "name": "pt3", 00:19:29.104 "uuid": "df251a19-9d8f-5d32-903f-22d22f15ee06", 00:19:29.104 "is_configured": true, 00:19:29.104 "data_offset": 2048, 00:19:29.104 "data_size": 63488 00:19:29.104 }, 00:19:29.104 { 00:19:29.104 "name": "pt4", 00:19:29.104 "uuid": "50a9b798-1a79-5d42-967e-dbd7bac618af", 00:19:29.104 "is_configured": true, 00:19:29.104 "data_offset": 2048, 00:19:29.104 "data_size": 63488 00:19:29.104 } 00:19:29.104 ] 00:19:29.104 }' 00:19:29.104 16:37:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.104 16:37:00 -- common/autotest_common.sh@10 -- # set +x 00:19:30.039 16:37:01 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:30.039 16:37:01 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:19:30.039 [2024-07-13 16:37:01.409165] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.039 16:37:01 -- bdev/bdev_raid.sh@506 -- # '[' d38b407f-7840-4267-a036-cfa1899152e7 '!=' d38b407f-7840-4267-a036-cfa1899152e7 ']' 00:19:30.039 16:37:01 -- bdev/bdev_raid.sh@511 -- # killprocess 132289 00:19:30.039 16:37:01 -- common/autotest_common.sh@926 -- # '[' -z 132289 ']' 00:19:30.039 16:37:01 -- common/autotest_common.sh@930 -- # kill -0 132289 00:19:30.039 16:37:01 -- common/autotest_common.sh@931 -- # uname 00:19:30.039 16:37:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:30.039 16:37:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132289 00:19:30.039 16:37:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:30.039 killing process with pid 132289 00:19:30.039 16:37:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:30.039 16:37:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132289' 00:19:30.039 16:37:01 -- common/autotest_common.sh@945 -- # kill 132289 00:19:30.039 [2024-07-13 16:37:01.470010] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:30.039 16:37:01 -- common/autotest_common.sh@950 -- # wait 132289 00:19:30.039 [2024-07-13 16:37:01.470137] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.039 [2024-07-13 16:37:01.470238] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.039 [2024-07-13 
16:37:01.470249] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:19:30.298 [2024-07-13 16:37:01.558400] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:30.556 16:37:01 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:30.556 00:19:30.556 real 0m22.243s 00:19:30.556 user 0m40.285s 00:19:30.556 sys 0m3.823s 00:19:30.556 16:37:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.556 16:37:01 -- common/autotest_common.sh@10 -- # set +x 00:19:30.556 ************************************ 00:19:30.556 END TEST raid_superblock_test 00:19:30.556 ************************************ 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:19:30.814 16:37:02 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:30.814 16:37:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:30.814 16:37:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.814 ************************************ 00:19:30.814 START TEST raid_rebuild_test 00:19:30.814 ************************************ 00:19:30.814 16:37:02 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:30.814 16:37:02 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:30.815 16:37:02 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:30.815 16:37:02 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:30.815 16:37:02 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:30.815 16:37:02 -- bdev/bdev_raid.sh@544 -- # raid_pid=132969 00:19:30.815 16:37:02 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:30.815 16:37:02 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132969 /var/tmp/spdk-raid.sock 00:19:30.815 16:37:02 -- common/autotest_common.sh@819 -- # '[' -z 132969 ']' 00:19:30.815 16:37:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:30.815 16:37:02 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:19:30.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:30.815 16:37:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:30.815 16:37:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:30.815 16:37:02 -- common/autotest_common.sh@10 -- # set +x 00:19:30.815 [2024-07-13 16:37:02.129159] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:30.815 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:30.815 Zero copy mechanism will not be used. 00:19:30.815 [2024-07-13 16:37:02.129462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132969 ] 00:19:31.073 [2024-07-13 16:37:02.291947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.073 [2024-07-13 16:37:02.381065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.073 [2024-07-13 16:37:02.467518] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.639 16:37:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:31.639 16:37:03 -- common/autotest_common.sh@852 -- # return 0 00:19:31.639 16:37:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:31.639 16:37:03 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:31.639 16:37:03 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:31.896 BaseBdev1 00:19:32.154 16:37:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:32.154 16:37:03 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:32.154 16:37:03 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:32.154 BaseBdev2 00:19:32.154 16:37:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:32.719 spare_malloc 00:19:32.719 16:37:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:32.719 spare_delay 00:19:32.719 16:37:04 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:32.979 [2024-07-13 16:37:04.357247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:32.979 [2024-07-13 16:37:04.357418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.979 [2024-07-13 16:37:04.357469] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:32.979 [2024-07-13 16:37:04.357537] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.979 [2024-07-13 16:37:04.360805] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.979 [2024-07-13 16:37:04.360892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:32.979 spare 00:19:32.979 16:37:04 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:33.238 [2024-07-13 16:37:04.625401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:33.238 [2024-07-13 16:37:04.628041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.238 [2024-07-13 16:37:04.628170] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:33.238 [2024-07-13 16:37:04.628181] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:33.238 [2024-07-13 16:37:04.628406] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:19:33.238 [2024-07-13 16:37:04.628903] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:33.238 [2024-07-13 16:37:04.628923] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:19:33.238 [2024-07-13 16:37:04.629134] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.238 16:37:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.497 16:37:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.497 "name": "raid_bdev1", 00:19:33.497 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:33.497 "strip_size_kb": 0, 00:19:33.497 "state": "online", 00:19:33.497 "raid_level": "raid1", 00:19:33.497 "superblock": false, 00:19:33.497 "num_base_bdevs": 2, 00:19:33.497 "num_base_bdevs_discovered": 2, 00:19:33.497 "num_base_bdevs_operational": 2, 00:19:33.497 "base_bdevs_list": [ 00:19:33.497 { 00:19:33.497 "name": "BaseBdev1", 00:19:33.497 "uuid": "2f27387a-77a3-46c1-9398-532b2f5cd37c", 00:19:33.497 "is_configured": true, 00:19:33.497 "data_offset": 0, 00:19:33.497 "data_size": 65536 00:19:33.497 }, 00:19:33.497 { 00:19:33.497 "name": "BaseBdev2", 00:19:33.497 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:33.497 "is_configured": true, 00:19:33.497 "data_offset": 0, 00:19:33.497 "data_size": 65536 00:19:33.497 } 00:19:33.497 ] 00:19:33.497 }' 00:19:33.497 16:37:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.497 16:37:04 -- common/autotest_common.sh@10 -- # set +x 00:19:34.065 16:37:05 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:34.065 16:37:05 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:34.324 [2024-07-13 16:37:05.697825] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.324 16:37:05 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:34.324 16:37:05 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.324 16:37:05 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:34.583 16:37:05 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:34.583 16:37:05 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:34.583 16:37:05 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:34.583 16:37:05 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:34.583 16:37:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:34.583 16:37:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:34.583 16:37:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:34.583 16:37:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:34.583 16:37:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:34.583 16:37:05 -- bdev/nbd_common.sh@12 -- # local i 00:19:34.583 16:37:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:34.583 16:37:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:34.583 16:37:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:34.841 [2024-07-13 16:37:06.173722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:34.841 /dev/nbd0 00:19:34.841 16:37:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:34.842 16:37:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:34.842 16:37:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:34.842 16:37:06 -- common/autotest_common.sh@857 -- # local i 00:19:34.842 16:37:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:34.842 16:37:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:34.842 16:37:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:34.842 16:37:06 -- common/autotest_common.sh@861 -- # break 00:19:34.842 16:37:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:34.842 16:37:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:34.842 16:37:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.842 1+0 records in 00:19:34.842 1+0 records out 00:19:34.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294658 s, 13.9 MB/s 00:19:34.842 16:37:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.842 16:37:06 -- common/autotest_common.sh@874 -- # size=4096 00:19:34.842 16:37:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.842 16:37:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:34.842 16:37:06 -- common/autotest_common.sh@877 -- # return 0 00:19:34.842 16:37:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.842 16:37:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:34.842 16:37:06 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:34.842 16:37:06 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:34.842 16:37:06 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:40.114 65536+0 records in 00:19:40.114 65536+0 records out 00:19:40.114 33554432 bytes (34 MB, 32 MiB) 
copied, 4.75129 s, 7.1 MB/s 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@51 -- # local i 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:40.114 [2024-07-13 16:37:11.276229] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@41 -- # break 00:19:40.114 16:37:11 -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:40.114 [2024-07-13 16:37:11.479836] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.114 16:37:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.372 16:37:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.372 "name": "raid_bdev1", 00:19:40.372 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:40.372 "strip_size_kb": 0, 00:19:40.372 "state": "online", 00:19:40.372 "raid_level": "raid1", 00:19:40.372 "superblock": false, 00:19:40.372 "num_base_bdevs": 2, 00:19:40.372 "num_base_bdevs_discovered": 1, 00:19:40.372 "num_base_bdevs_operational": 1, 00:19:40.372 "base_bdevs_list": [ 00:19:40.372 { 00:19:40.372 "name": null, 00:19:40.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.372 "is_configured": false, 00:19:40.372 "data_offset": 0, 00:19:40.372 "data_size": 65536 00:19:40.372 }, 00:19:40.372 { 00:19:40.372 "name": "BaseBdev2", 00:19:40.372 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:40.372 "is_configured": true, 00:19:40.372 "data_offset": 0, 00:19:40.373 "data_size": 65536 00:19:40.373 } 00:19:40.373 ] 00:19:40.373 }' 
00:19:40.373 16:37:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.373 16:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:40.941 16:37:12 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:41.199 [2024-07-13 16:37:12.624042] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:41.199 [2024-07-13 16:37:12.624124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.199 [2024-07-13 16:37:12.632218] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d05ee0 00:19:41.199 [2024-07-13 16:37:12.635006] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:41.199 16:37:12 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:42.578 "name": "raid_bdev1", 00:19:42.578 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:42.578 "strip_size_kb": 0, 00:19:42.578 "state": "online", 00:19:42.578 "raid_level": "raid1", 00:19:42.578 "superblock": false, 00:19:42.578 "num_base_bdevs": 2, 00:19:42.578 "num_base_bdevs_discovered": 2, 00:19:42.578 "num_base_bdevs_operational": 2, 00:19:42.578 "process": { 00:19:42.578 "type": "rebuild", 00:19:42.578 "target": "spare", 00:19:42.578 "progress": { 00:19:42.578 "blocks": 24576, 00:19:42.578 "percent": 37 00:19:42.578 } 00:19:42.578 }, 00:19:42.578 "base_bdevs_list": [ 00:19:42.578 { 00:19:42.578 "name": "spare", 00:19:42.578 "uuid": "f50d3809-a79a-5c72-9fe1-7a045eb9e816", 00:19:42.578 "is_configured": true, 00:19:42.578 "data_offset": 0, 00:19:42.578 "data_size": 65536 00:19:42.578 }, 00:19:42.578 { 00:19:42.578 "name": "BaseBdev2", 00:19:42.578 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:42.578 "is_configured": true, 00:19:42.578 "data_offset": 0, 00:19:42.578 "data_size": 65536 00:19:42.578 } 00:19:42.578 ] 00:19:42.578 }' 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.578 16:37:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:42.578 16:37:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.578 16:37:14 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:42.836 [2024-07-13 16:37:14.268998] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.094 [2024-07-13 16:37:14.349715] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:43.094 [2024-07-13 16:37:14.349883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.094 16:37:14 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.094 16:37:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.353 16:37:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.353 "name": "raid_bdev1", 00:19:43.353 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:43.353 "strip_size_kb": 0, 00:19:43.353 "state": "online", 00:19:43.353 "raid_level": "raid1", 00:19:43.353 "superblock": false, 00:19:43.353 "num_base_bdevs": 2, 00:19:43.353 "num_base_bdevs_discovered": 1, 00:19:43.353 "num_base_bdevs_operational": 1, 00:19:43.353 "base_bdevs_list": [ 00:19:43.353 { 00:19:43.353 "name": null, 00:19:43.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.353 "is_configured": false, 00:19:43.353 "data_offset": 0, 00:19:43.353 "data_size": 65536 00:19:43.353 }, 00:19:43.353 { 00:19:43.353 "name": "BaseBdev2", 00:19:43.353 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:43.353 "is_configured": true, 00:19:43.353 "data_offset": 0, 00:19:43.353 "data_size": 65536 00:19:43.353 } 00:19:43.353 ] 00:19:43.353 }' 00:19:43.353 16:37:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.353 16:37:14 -- common/autotest_common.sh@10 -- # set +x 00:19:43.918 16:37:15 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:43.918 16:37:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:43.918 16:37:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:43.918 16:37:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:43.918 16:37:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:43.918 16:37:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.918 16:37:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.176 16:37:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:44.176 "name": "raid_bdev1", 00:19:44.176 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:44.176 "strip_size_kb": 0, 00:19:44.176 "state": "online", 00:19:44.176 "raid_level": "raid1", 00:19:44.176 "superblock": false, 00:19:44.176 "num_base_bdevs": 2, 00:19:44.176 "num_base_bdevs_discovered": 1, 00:19:44.176 "num_base_bdevs_operational": 1, 00:19:44.176 "base_bdevs_list": [ 00:19:44.176 { 00:19:44.176 "name": null, 00:19:44.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.176 "is_configured": false, 00:19:44.176 "data_offset": 0, 00:19:44.176 "data_size": 65536 00:19:44.176 }, 00:19:44.176 { 00:19:44.176 "name": "BaseBdev2", 00:19:44.176 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:44.176 "is_configured": true, 
00:19:44.176 "data_offset": 0, 00:19:44.176 "data_size": 65536 00:19:44.176 } 00:19:44.176 ] 00:19:44.176 }' 00:19:44.176 16:37:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:44.176 16:37:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:44.176 16:37:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:44.176 16:37:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:44.176 16:37:15 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:44.742 [2024-07-13 16:37:15.912733] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:44.742 [2024-07-13 16:37:15.912826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.742 [2024-07-13 16:37:15.921046] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:19:44.742 [2024-07-13 16:37:15.923750] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:44.742 16:37:15 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:45.678 16:37:16 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.678 16:37:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:45.678 16:37:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:45.678 16:37:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:45.678 16:37:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:45.678 16:37:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.678 16:37:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.936 16:37:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:45.937 "name": "raid_bdev1", 00:19:45.937 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:45.937 "strip_size_kb": 0, 00:19:45.937 "state": "online", 00:19:45.937 "raid_level": "raid1", 00:19:45.937 "superblock": false, 00:19:45.937 "num_base_bdevs": 2, 00:19:45.937 "num_base_bdevs_discovered": 2, 00:19:45.937 "num_base_bdevs_operational": 2, 00:19:45.937 "process": { 00:19:45.937 "type": "rebuild", 00:19:45.937 "target": "spare", 00:19:45.937 "progress": { 00:19:45.937 "blocks": 24576, 00:19:45.937 "percent": 37 00:19:45.937 } 00:19:45.937 }, 00:19:45.937 "base_bdevs_list": [ 00:19:45.937 { 00:19:45.937 "name": "spare", 00:19:45.937 "uuid": "f50d3809-a79a-5c72-9fe1-7a045eb9e816", 00:19:45.937 "is_configured": true, 00:19:45.937 "data_offset": 0, 00:19:45.937 "data_size": 65536 00:19:45.937 }, 00:19:45.937 { 00:19:45.937 "name": "BaseBdev2", 00:19:45.937 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:45.937 "is_configured": true, 00:19:45.937 "data_offset": 0, 00:19:45.937 "data_size": 65536 00:19:45.937 } 00:19:45.937 ] 00:19:45.937 }' 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:45.937 16:37:17 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@657 -- # local timeout=373 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.937 16:37:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.195 16:37:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:46.195 "name": "raid_bdev1", 00:19:46.195 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:46.195 "strip_size_kb": 0, 00:19:46.195 "state": "online", 00:19:46.195 "raid_level": "raid1", 00:19:46.195 "superblock": false, 00:19:46.195 "num_base_bdevs": 2, 00:19:46.195 "num_base_bdevs_discovered": 2, 00:19:46.195 "num_base_bdevs_operational": 2, 00:19:46.195 "process": { 00:19:46.195 "type": "rebuild", 00:19:46.195 "target": "spare", 00:19:46.195 "progress": { 00:19:46.195 "blocks": 32768, 00:19:46.195 "percent": 50 00:19:46.195 } 00:19:46.195 }, 00:19:46.195 "base_bdevs_list": [ 00:19:46.195 { 00:19:46.195 "name": "spare", 00:19:46.195 "uuid": "f50d3809-a79a-5c72-9fe1-7a045eb9e816", 00:19:46.195 "is_configured": true, 00:19:46.195 "data_offset": 0, 00:19:46.195 "data_size": 65536 00:19:46.195 }, 00:19:46.196 { 00:19:46.196 "name": "BaseBdev2", 00:19:46.196 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:46.196 "is_configured": true, 00:19:46.196 "data_offset": 0, 00:19:46.196 "data_size": 65536 00:19:46.196 } 00:19:46.196 ] 00:19:46.196 }' 00:19:46.196 16:37:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:46.196 16:37:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.196 16:37:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:46.453 16:37:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.453 16:37:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:47.388 16:37:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:47.388 16:37:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.388 16:37:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:47.388 16:37:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:47.388 16:37:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:47.388 16:37:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:47.388 16:37:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.388 16:37:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.646 16:37:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:47.646 "name": "raid_bdev1", 00:19:47.646 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:47.646 "strip_size_kb": 0, 00:19:47.646 "state": "online", 00:19:47.646 "raid_level": "raid1", 00:19:47.646 "superblock": false, 00:19:47.646 "num_base_bdevs": 2, 00:19:47.646 "num_base_bdevs_discovered": 2, 00:19:47.646 "num_base_bdevs_operational": 2, 00:19:47.646 "process": { 
00:19:47.646 "type": "rebuild", 00:19:47.646 "target": "spare", 00:19:47.646 "progress": { 00:19:47.646 "blocks": 59392, 00:19:47.646 "percent": 90 00:19:47.646 } 00:19:47.646 }, 00:19:47.646 "base_bdevs_list": [ 00:19:47.646 { 00:19:47.646 "name": "spare", 00:19:47.646 "uuid": "f50d3809-a79a-5c72-9fe1-7a045eb9e816", 00:19:47.646 "is_configured": true, 00:19:47.646 "data_offset": 0, 00:19:47.646 "data_size": 65536 00:19:47.646 }, 00:19:47.646 { 00:19:47.646 "name": "BaseBdev2", 00:19:47.646 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:47.646 "is_configured": true, 00:19:47.646 "data_offset": 0, 00:19:47.646 "data_size": 65536 00:19:47.646 } 00:19:47.646 ] 00:19:47.646 }' 00:19:47.646 16:37:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:47.646 16:37:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.646 16:37:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:47.646 16:37:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.646 16:37:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:47.905 [2024-07-13 16:37:19.148756] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:47.905 [2024-07-13 16:37:19.148888] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:47.905 [2024-07-13 16:37:19.149014] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.842 16:37:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:48.842 16:37:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.842 16:37:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:48.842 16:37:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:48.842 16:37:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:48.842 16:37:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:48.842 16:37:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.842 16:37:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:49.102 "name": "raid_bdev1", 00:19:49.102 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:49.102 "strip_size_kb": 0, 00:19:49.102 "state": "online", 00:19:49.102 "raid_level": "raid1", 00:19:49.102 "superblock": false, 00:19:49.102 "num_base_bdevs": 2, 00:19:49.102 "num_base_bdevs_discovered": 2, 00:19:49.102 "num_base_bdevs_operational": 2, 00:19:49.102 "base_bdevs_list": [ 00:19:49.102 { 00:19:49.102 "name": "spare", 00:19:49.102 "uuid": "f50d3809-a79a-5c72-9fe1-7a045eb9e816", 00:19:49.102 "is_configured": true, 00:19:49.102 "data_offset": 0, 00:19:49.102 "data_size": 65536 00:19:49.102 }, 00:19:49.102 { 00:19:49.102 "name": "BaseBdev2", 00:19:49.102 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:49.102 "is_configured": true, 00:19:49.102 "data_offset": 0, 00:19:49.102 "data_size": 65536 00:19:49.102 } 00:19:49.102 ] 00:19:49.102 }' 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@660 -- # break 00:19:49.102 16:37:20 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.102 16:37:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:49.361 "name": "raid_bdev1", 00:19:49.361 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:49.361 "strip_size_kb": 0, 00:19:49.361 "state": "online", 00:19:49.361 "raid_level": "raid1", 00:19:49.361 "superblock": false, 00:19:49.361 "num_base_bdevs": 2, 00:19:49.361 "num_base_bdevs_discovered": 2, 00:19:49.361 "num_base_bdevs_operational": 2, 00:19:49.361 "base_bdevs_list": [ 00:19:49.361 { 00:19:49.361 "name": "spare", 00:19:49.361 "uuid": "f50d3809-a79a-5c72-9fe1-7a045eb9e816", 00:19:49.361 "is_configured": true, 00:19:49.361 "data_offset": 0, 00:19:49.361 "data_size": 65536 00:19:49.361 }, 00:19:49.361 { 00:19:49.361 "name": "BaseBdev2", 00:19:49.361 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:49.361 "is_configured": true, 00:19:49.361 "data_offset": 0, 00:19:49.361 "data_size": 65536 00:19:49.361 } 00:19:49.361 ] 00:19:49.361 }' 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.361 16:37:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.620 16:37:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.620 "name": "raid_bdev1", 00:19:49.620 "uuid": "1044c040-af55-45b7-aad1-35836bfc893c", 00:19:49.620 "strip_size_kb": 0, 00:19:49.620 "state": "online", 00:19:49.620 "raid_level": "raid1", 00:19:49.620 "superblock": false, 00:19:49.620 "num_base_bdevs": 2, 00:19:49.620 "num_base_bdevs_discovered": 2, 00:19:49.620 "num_base_bdevs_operational": 2, 00:19:49.620 "base_bdevs_list": [ 00:19:49.620 { 00:19:49.620 "name": "spare", 00:19:49.620 "uuid": "f50d3809-a79a-5c72-9fe1-7a045eb9e816", 00:19:49.620 "is_configured": true, 00:19:49.620 "data_offset": 0, 
00:19:49.620 "data_size": 65536 00:19:49.620 }, 00:19:49.620 { 00:19:49.620 "name": "BaseBdev2", 00:19:49.620 "uuid": "7a504b0e-2895-403d-adcb-416c54681ead", 00:19:49.620 "is_configured": true, 00:19:49.620 "data_offset": 0, 00:19:49.620 "data_size": 65536 00:19:49.620 } 00:19:49.620 ] 00:19:49.620 }' 00:19:49.620 16:37:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.620 16:37:21 -- common/autotest_common.sh@10 -- # set +x 00:19:50.557 16:37:21 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:50.557 [2024-07-13 16:37:21.917713] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.557 [2024-07-13 16:37:21.917772] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.557 [2024-07-13 16:37:21.917911] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.557 [2024-07-13 16:37:21.918014] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.557 [2024-07-13 16:37:21.918026] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:19:50.557 16:37:21 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:50.557 16:37:21 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.815 16:37:22 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:50.815 16:37:22 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:50.815 16:37:22 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:50.815 16:37:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:50.815 16:37:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:50.815 16:37:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:50.815 16:37:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:50.815 16:37:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:50.815 16:37:22 -- bdev/nbd_common.sh@12 -- # local i 00:19:50.815 16:37:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:50.815 16:37:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:50.815 16:37:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:51.074 /dev/nbd0 00:19:51.074 16:37:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:51.074 16:37:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:51.074 16:37:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:51.074 16:37:22 -- common/autotest_common.sh@857 -- # local i 00:19:51.074 16:37:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:51.074 16:37:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:51.074 16:37:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:51.074 16:37:22 -- common/autotest_common.sh@861 -- # break 00:19:51.074 16:37:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:51.074 16:37:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:51.074 16:37:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.074 1+0 records in 00:19:51.074 1+0 records out 00:19:51.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862388 s, 4.7 MB/s 00:19:51.074 16:37:22 
-- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.074 16:37:22 -- common/autotest_common.sh@874 -- # size=4096 00:19:51.074 16:37:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.074 16:37:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:51.074 16:37:22 -- common/autotest_common.sh@877 -- # return 0 00:19:51.074 16:37:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.074 16:37:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:51.074 16:37:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:51.333 /dev/nbd1 00:19:51.333 16:37:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:51.333 16:37:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:51.333 16:37:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:51.333 16:37:22 -- common/autotest_common.sh@857 -- # local i 00:19:51.333 16:37:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:51.333 16:37:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:51.333 16:37:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:51.333 16:37:22 -- common/autotest_common.sh@861 -- # break 00:19:51.333 16:37:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:51.333 16:37:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:51.333 16:37:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.333 1+0 records in 00:19:51.333 1+0 records out 00:19:51.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820643 s, 5.0 MB/s 00:19:51.333 16:37:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.333 16:37:22 -- common/autotest_common.sh@874 -- # size=4096 00:19:51.333 16:37:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.333 16:37:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:51.333 16:37:22 -- common/autotest_common.sh@877 -- # return 0 00:19:51.333 16:37:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:51.333 16:37:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:51.333 16:37:22 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:51.592 16:37:22 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:51.592 16:37:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:51.592 16:37:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:51.592 16:37:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.592 16:37:22 -- bdev/nbd_common.sh@51 -- # local i 00:19:51.592 16:37:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.592 16:37:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:51.850 16:37:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:51.850 16:37:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:51.850 16:37:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:51.850 16:37:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.850 16:37:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.850 16:37:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:51.850 16:37:23 -- bdev/nbd_common.sh@41 -- # break 
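At this point the plain rebuild test is verifying data integrity: both halves of the RAID1 mirror are exported over NBD, compared byte-for-byte from offset 0 with cmp, and then torn down. A minimal sketch of that pattern, assuming only the RPC socket, bdev names, and commands visible in the trace (a reconstruction, not the harness's actual code):

    # export the surviving base bdev and the rebuilt spare as kernel block devices
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc nbd_start_disk spare /dev/nbd1
    for nbd in nbd0 nbd1; do
        # wait until the kernel actually registers each device
        for i in $(seq 1 20); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
    done
    cmp -i 0 /dev/nbd0 /dev/nbd1    # no superblock in this test, so member data starts at offset 0
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1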
00:19:51.850 16:37:23 -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.850 16:37:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.850 16:37:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:52.108 16:37:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:52.108 16:37:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:52.108 16:37:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:52.108 16:37:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.108 16:37:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.108 16:37:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:52.108 16:37:23 -- bdev/nbd_common.sh@41 -- # break 00:19:52.108 16:37:23 -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.108 16:37:23 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:52.108 16:37:23 -- bdev/bdev_raid.sh@709 -- # killprocess 132969 00:19:52.108 16:37:23 -- common/autotest_common.sh@926 -- # '[' -z 132969 ']' 00:19:52.108 16:37:23 -- common/autotest_common.sh@930 -- # kill -0 132969 00:19:52.108 16:37:23 -- common/autotest_common.sh@931 -- # uname 00:19:52.108 16:37:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.108 16:37:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132969 00:19:52.108 16:37:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:52.108 16:37:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:52.108 16:37:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132969' 00:19:52.108 killing process with pid 132969 00:19:52.108 16:37:23 -- common/autotest_common.sh@945 -- # kill 132969 00:19:52.108 Received shutdown signal, test time was about 60.000000 seconds 00:19:52.108 00:19:52.108 Latency(us) 00:19:52.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.108 =================================================================================================================== 00:19:52.108 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.108 [2024-07-13 16:37:23.379784] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:52.108 16:37:23 -- common/autotest_common.sh@950 -- # wait 132969 00:19:52.108 [2024-07-13 16:37:23.439687] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:52.676 00:19:52.676 real 0m21.836s 00:19:52.676 user 0m29.661s 00:19:52.676 sys 0m5.080s 00:19:52.676 16:37:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.676 16:37:23 -- common/autotest_common.sh@10 -- # set +x 00:19:52.676 ************************************ 00:19:52.676 END TEST raid_rebuild_test 00:19:52.676 ************************************ 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:19:52.676 16:37:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:52.676 16:37:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:52.676 16:37:23 -- common/autotest_common.sh@10 -- # set +x 00:19:52.676 ************************************ 00:19:52.676 START TEST raid_rebuild_test_sb 00:19:52.676 ************************************ 00:19:52.676 16:37:23 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:52.676 
16:37:23 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@544 -- # raid_pid=133505 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133505 /var/tmp/spdk-raid.sock 00:19:52.676 16:37:23 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:52.676 16:37:23 -- common/autotest_common.sh@819 -- # '[' -z 133505 ']' 00:19:52.676 16:37:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:52.676 16:37:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:52.676 16:37:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:52.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:52.676 16:37:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:52.676 16:37:23 -- common/autotest_common.sh@10 -- # set +x 00:19:52.676 [2024-07-13 16:37:24.050457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:52.676 [2024-07-13 16:37:24.050768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133505 ] 00:19:52.676 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:52.676 Zero copy mechanism will not be used. 
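The raid_rebuild_test_sb run starting here exercises the same state machine as the previous test, and every verify_raid_bdev_process step below reduces to a single jq probe of the RPC output: fetch the raid bdev, read .process.type and .process.target, and treat a missing process object as "none". A minimal sketch of that probe, reconstructed from the trace rather than copied from bdev_raid.sh:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    process_type=$(jq -r '.process.type // "none"' <<< "$info")
    process_target=$(jq -r '.process.target // "none"' <<< "$info")
    echo "$process_type $process_target"    # "rebuild spare" while rebuilding, "none none" when idle

The wait loop in the trace simply re-runs this probe once a second and breaks when the process reports none, bounded by (( SECONDS < timeout )); in this run the trace shows timeout=395.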
00:19:52.935 [2024-07-13 16:37:24.208433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.935 [2024-07-13 16:37:24.291207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.935 [2024-07-13 16:37:24.372505] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.888 16:37:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:53.888 16:37:24 -- common/autotest_common.sh@852 -- # return 0 00:19:53.888 16:37:24 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:53.888 16:37:24 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:53.888 16:37:24 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:53.888 BaseBdev1_malloc 00:19:53.888 16:37:25 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:54.157 [2024-07-13 16:37:25.511576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:54.157 [2024-07-13 16:37:25.511763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.157 [2024-07-13 16:37:25.511824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:19:54.157 [2024-07-13 16:37:25.511881] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.157 [2024-07-13 16:37:25.515089] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.157 [2024-07-13 16:37:25.515170] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:54.157 BaseBdev1 00:19:54.157 16:37:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:54.157 16:37:25 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:54.157 16:37:25 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:54.417 BaseBdev2_malloc 00:19:54.417 16:37:25 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:54.677 [2024-07-13 16:37:25.947919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:54.677 [2024-07-13 16:37:25.948065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.677 [2024-07-13 16:37:25.948113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:54.677 [2024-07-13 16:37:25.948172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.677 [2024-07-13 16:37:25.951077] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.677 [2024-07-13 16:37:25.951144] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:54.677 BaseBdev2 00:19:54.677 16:37:25 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:54.936 spare_malloc 00:19:54.937 16:37:26 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:55.195 spare_delay 00:19:55.195 16:37:26 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:55.195 [2024-07-13 16:37:26.634324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:55.195 [2024-07-13 16:37:26.634467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.195 [2024-07-13 16:37:26.634520] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:55.195 [2024-07-13 16:37:26.634572] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.196 [2024-07-13 16:37:26.637720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.196 [2024-07-13 16:37:26.637799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:55.196 spare 00:19:55.196 16:37:26 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:55.455 [2024-07-13 16:37:26.842647] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.455 [2024-07-13 16:37:26.845349] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:55.455 [2024-07-13 16:37:26.845616] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:19:55.455 [2024-07-13 16:37:26.845629] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:55.455 [2024-07-13 16:37:26.845829] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:55.455 [2024-07-13 16:37:26.846292] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:19:55.455 [2024-07-13 16:37:26.846314] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:19:55.455 [2024-07-13 16:37:26.846548] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.455 16:37:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.714 16:37:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:55.714 "name": "raid_bdev1", 00:19:55.714 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:19:55.714 "strip_size_kb": 0, 00:19:55.714 "state": "online", 00:19:55.714 "raid_level": "raid1", 00:19:55.714 "superblock": true, 00:19:55.714 "num_base_bdevs": 2, 00:19:55.714 "num_base_bdevs_discovered": 2, 00:19:55.714 "num_base_bdevs_operational": 2, 00:19:55.714 
"base_bdevs_list": [ 00:19:55.714 { 00:19:55.714 "name": "BaseBdev1", 00:19:55.714 "uuid": "dc954fb3-fbb9-50fb-a73e-61edb7243412", 00:19:55.714 "is_configured": true, 00:19:55.714 "data_offset": 2048, 00:19:55.714 "data_size": 63488 00:19:55.714 }, 00:19:55.714 { 00:19:55.714 "name": "BaseBdev2", 00:19:55.714 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:19:55.714 "is_configured": true, 00:19:55.714 "data_offset": 2048, 00:19:55.714 "data_size": 63488 00:19:55.714 } 00:19:55.714 ] 00:19:55.714 }' 00:19:55.714 16:37:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:55.714 16:37:27 -- common/autotest_common.sh@10 -- # set +x 00:19:56.283 16:37:27 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:56.283 16:37:27 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:56.543 [2024-07-13 16:37:27.882875] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.543 16:37:27 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:19:56.543 16:37:27 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.543 16:37:27 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:56.803 16:37:28 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:19:56.803 16:37:28 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:56.803 16:37:28 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:56.803 16:37:28 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:56.803 16:37:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:56.803 16:37:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:56.803 16:37:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.803 16:37:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:56.803 16:37:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.803 16:37:28 -- bdev/nbd_common.sh@12 -- # local i 00:19:56.803 16:37:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.803 16:37:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.803 16:37:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:57.061 [2024-07-13 16:37:28.350885] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:57.061 /dev/nbd0 00:19:57.061 16:37:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:57.061 16:37:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:57.061 16:37:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:57.061 16:37:28 -- common/autotest_common.sh@857 -- # local i 00:19:57.061 16:37:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:57.061 16:37:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:57.061 16:37:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:57.061 16:37:28 -- common/autotest_common.sh@861 -- # break 00:19:57.061 16:37:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:57.061 16:37:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:57.061 16:37:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.061 1+0 records in 00:19:57.061 1+0 records out 00:19:57.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360629 s, 11.4 MB/s 00:19:57.062 16:37:28 
-- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.062 16:37:28 -- common/autotest_common.sh@874 -- # size=4096 00:19:57.062 16:37:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.062 16:37:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:57.062 16:37:28 -- common/autotest_common.sh@877 -- # return 0 00:19:57.062 16:37:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.062 16:37:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.062 16:37:28 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:57.062 16:37:28 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:57.062 16:37:28 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:02.334 63488+0 records in 00:20:02.334 63488+0 records out 00:20:02.334 32505856 bytes (33 MB, 31 MiB) copied, 4.92825 s, 6.6 MB/s 00:20:02.334 16:37:33 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@51 -- # local i 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:02.334 [2024-07-13 16:37:33.628475] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@41 -- # break 00:20:02.334 16:37:33 -- bdev/nbd_common.sh@45 -- # return 0 00:20:02.334 16:37:33 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:02.593 [2024-07-13 16:37:33.876029] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.593 16:37:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.880 
16:37:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.880 "name": "raid_bdev1", 00:20:02.880 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:02.880 "strip_size_kb": 0, 00:20:02.880 "state": "online", 00:20:02.880 "raid_level": "raid1", 00:20:02.880 "superblock": true, 00:20:02.880 "num_base_bdevs": 2, 00:20:02.880 "num_base_bdevs_discovered": 1, 00:20:02.880 "num_base_bdevs_operational": 1, 00:20:02.880 "base_bdevs_list": [ 00:20:02.880 { 00:20:02.880 "name": null, 00:20:02.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.880 "is_configured": false, 00:20:02.880 "data_offset": 2048, 00:20:02.880 "data_size": 63488 00:20:02.880 }, 00:20:02.880 { 00:20:02.880 "name": "BaseBdev2", 00:20:02.880 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:02.880 "is_configured": true, 00:20:02.880 "data_offset": 2048, 00:20:02.880 "data_size": 63488 00:20:02.880 } 00:20:02.880 ] 00:20:02.880 }' 00:20:02.880 16:37:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.880 16:37:34 -- common/autotest_common.sh@10 -- # set +x 00:20:03.446 16:37:34 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:03.703 [2024-07-13 16:37:34.988295] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:03.703 [2024-07-13 16:37:34.988394] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:03.703 [2024-07-13 16:37:34.996469] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e0e0 00:20:03.703 [2024-07-13 16:37:34.999211] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:03.703 16:37:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:04.634 16:37:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.634 16:37:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:04.634 16:37:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:04.634 16:37:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:04.634 16:37:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:04.634 16:37:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.634 16:37:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.891 16:37:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:04.891 "name": "raid_bdev1", 00:20:04.891 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:04.891 "strip_size_kb": 0, 00:20:04.891 "state": "online", 00:20:04.891 "raid_level": "raid1", 00:20:04.891 "superblock": true, 00:20:04.891 "num_base_bdevs": 2, 00:20:04.891 "num_base_bdevs_discovered": 2, 00:20:04.891 "num_base_bdevs_operational": 2, 00:20:04.891 "process": { 00:20:04.891 "type": "rebuild", 00:20:04.891 "target": "spare", 00:20:04.891 "progress": { 00:20:04.891 "blocks": 24576, 00:20:04.891 "percent": 38 00:20:04.891 } 00:20:04.891 }, 00:20:04.891 "base_bdevs_list": [ 00:20:04.891 { 00:20:04.891 "name": "spare", 00:20:04.891 "uuid": "2a81b8de-6726-50ee-ae6c-366b5db68949", 00:20:04.891 "is_configured": true, 00:20:04.891 "data_offset": 2048, 00:20:04.891 "data_size": 63488 00:20:04.891 }, 00:20:04.891 { 00:20:04.891 "name": "BaseBdev2", 00:20:04.891 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:04.891 "is_configured": true, 00:20:04.891 "data_offset": 2048, 00:20:04.891 "data_size": 63488 
00:20:04.891 } 00:20:04.891 ] 00:20:04.891 }' 00:20:04.891 16:37:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:04.891 16:37:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.891 16:37:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:04.891 16:37:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.891 16:37:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:05.149 [2024-07-13 16:37:36.584934] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:05.149 [2024-07-13 16:37:36.612907] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:05.149 [2024-07-13 16:37:36.613036] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.407 16:37:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.664 16:37:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.664 "name": "raid_bdev1", 00:20:05.664 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:05.664 "strip_size_kb": 0, 00:20:05.664 "state": "online", 00:20:05.664 "raid_level": "raid1", 00:20:05.664 "superblock": true, 00:20:05.664 "num_base_bdevs": 2, 00:20:05.664 "num_base_bdevs_discovered": 1, 00:20:05.664 "num_base_bdevs_operational": 1, 00:20:05.664 "base_bdevs_list": [ 00:20:05.664 { 00:20:05.664 "name": null, 00:20:05.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.664 "is_configured": false, 00:20:05.664 "data_offset": 2048, 00:20:05.664 "data_size": 63488 00:20:05.664 }, 00:20:05.664 { 00:20:05.664 "name": "BaseBdev2", 00:20:05.664 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:05.664 "is_configured": true, 00:20:05.664 "data_offset": 2048, 00:20:05.664 "data_size": 63488 00:20:05.664 } 00:20:05.664 ] 00:20:05.664 }' 00:20:05.664 16:37:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.664 16:37:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.229 16:37:37 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.229 16:37:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:06.229 16:37:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:06.229 16:37:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:06.229 16:37:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:06.229 16:37:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.229 16:37:37 -- 
bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.487 16:37:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:06.487 "name": "raid_bdev1", 00:20:06.487 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:06.487 "strip_size_kb": 0, 00:20:06.487 "state": "online", 00:20:06.487 "raid_level": "raid1", 00:20:06.487 "superblock": true, 00:20:06.487 "num_base_bdevs": 2, 00:20:06.487 "num_base_bdevs_discovered": 1, 00:20:06.487 "num_base_bdevs_operational": 1, 00:20:06.487 "base_bdevs_list": [ 00:20:06.487 { 00:20:06.487 "name": null, 00:20:06.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.487 "is_configured": false, 00:20:06.487 "data_offset": 2048, 00:20:06.487 "data_size": 63488 00:20:06.487 }, 00:20:06.487 { 00:20:06.487 "name": "BaseBdev2", 00:20:06.487 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:06.487 "is_configured": true, 00:20:06.487 "data_offset": 2048, 00:20:06.487 "data_size": 63488 00:20:06.487 } 00:20:06.487 ] 00:20:06.487 }' 00:20:06.487 16:37:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:06.487 16:37:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:06.487 16:37:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:06.487 16:37:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:06.487 16:37:37 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:06.745 [2024-07-13 16:37:38.200736] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:06.745 [2024-07-13 16:37:38.200809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:06.745 [2024-07-13 16:37:38.208832] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:20:06.745 [2024-07-13 16:37:38.211548] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:07.003 16:37:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:07.938 16:37:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.938 16:37:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:07.938 16:37:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:07.938 16:37:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:07.938 16:37:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:07.938 16:37:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.938 16:37:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:08.196 "name": "raid_bdev1", 00:20:08.196 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:08.196 "strip_size_kb": 0, 00:20:08.196 "state": "online", 00:20:08.196 "raid_level": "raid1", 00:20:08.196 "superblock": true, 00:20:08.196 "num_base_bdevs": 2, 00:20:08.196 "num_base_bdevs_discovered": 2, 00:20:08.196 "num_base_bdevs_operational": 2, 00:20:08.196 "process": { 00:20:08.196 "type": "rebuild", 00:20:08.196 "target": "spare", 00:20:08.196 "progress": { 00:20:08.196 "blocks": 24576, 00:20:08.196 "percent": 38 00:20:08.196 } 00:20:08.196 }, 00:20:08.196 "base_bdevs_list": [ 00:20:08.196 { 00:20:08.196 "name": "spare", 00:20:08.196 "uuid": "2a81b8de-6726-50ee-ae6c-366b5db68949", 00:20:08.196 
"is_configured": true, 00:20:08.196 "data_offset": 2048, 00:20:08.196 "data_size": 63488 00:20:08.196 }, 00:20:08.196 { 00:20:08.196 "name": "BaseBdev2", 00:20:08.196 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:08.196 "is_configured": true, 00:20:08.196 "data_offset": 2048, 00:20:08.196 "data_size": 63488 00:20:08.196 } 00:20:08.196 ] 00:20:08.196 }' 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:08.196 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@657 -- # local timeout=395 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.196 16:37:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.455 16:37:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:08.455 "name": "raid_bdev1", 00:20:08.455 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:08.455 "strip_size_kb": 0, 00:20:08.455 "state": "online", 00:20:08.455 "raid_level": "raid1", 00:20:08.455 "superblock": true, 00:20:08.455 "num_base_bdevs": 2, 00:20:08.455 "num_base_bdevs_discovered": 2, 00:20:08.455 "num_base_bdevs_operational": 2, 00:20:08.455 "process": { 00:20:08.455 "type": "rebuild", 00:20:08.455 "target": "spare", 00:20:08.455 "progress": { 00:20:08.455 "blocks": 30720, 00:20:08.455 "percent": 48 00:20:08.455 } 00:20:08.455 }, 00:20:08.455 "base_bdevs_list": [ 00:20:08.455 { 00:20:08.455 "name": "spare", 00:20:08.455 "uuid": "2a81b8de-6726-50ee-ae6c-366b5db68949", 00:20:08.455 "is_configured": true, 00:20:08.455 "data_offset": 2048, 00:20:08.455 "data_size": 63488 00:20:08.455 }, 00:20:08.455 { 00:20:08.455 "name": "BaseBdev2", 00:20:08.455 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:08.455 "is_configured": true, 00:20:08.455 "data_offset": 2048, 00:20:08.455 "data_size": 63488 00:20:08.455 } 00:20:08.455 ] 00:20:08.455 }' 00:20:08.455 16:37:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:08.455 16:37:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.455 16:37:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:08.455 16:37:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.456 16:37:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:09.834 16:37:40 -- bdev/bdev_raid.sh@658 
-- # (( SECONDS < timeout )) 00:20:09.834 16:37:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.834 16:37:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:09.834 16:37:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:09.834 16:37:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:09.834 16:37:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:09.834 16:37:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.834 16:37:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.834 16:37:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:09.834 "name": "raid_bdev1", 00:20:09.834 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:09.834 "strip_size_kb": 0, 00:20:09.834 "state": "online", 00:20:09.834 "raid_level": "raid1", 00:20:09.834 "superblock": true, 00:20:09.834 "num_base_bdevs": 2, 00:20:09.834 "num_base_bdevs_discovered": 2, 00:20:09.834 "num_base_bdevs_operational": 2, 00:20:09.834 "process": { 00:20:09.834 "type": "rebuild", 00:20:09.834 "target": "spare", 00:20:09.834 "progress": { 00:20:09.834 "blocks": 59392, 00:20:09.834 "percent": 93 00:20:09.834 } 00:20:09.834 }, 00:20:09.834 "base_bdevs_list": [ 00:20:09.834 { 00:20:09.834 "name": "spare", 00:20:09.834 "uuid": "2a81b8de-6726-50ee-ae6c-366b5db68949", 00:20:09.834 "is_configured": true, 00:20:09.834 "data_offset": 2048, 00:20:09.834 "data_size": 63488 00:20:09.834 }, 00:20:09.834 { 00:20:09.834 "name": "BaseBdev2", 00:20:09.834 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:09.834 "is_configured": true, 00:20:09.834 "data_offset": 2048, 00:20:09.834 "data_size": 63488 00:20:09.834 } 00:20:09.834 ] 00:20:09.834 }' 00:20:09.834 16:37:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:09.834 16:37:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.834 16:37:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:09.834 16:37:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.834 16:37:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:10.093 [2024-07-13 16:37:41.336459] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:10.093 [2024-07-13 16:37:41.336607] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:10.093 [2024-07-13 16:37:41.336792] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.029 16:37:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:11.029 16:37:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.029 16:37:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:11.029 16:37:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:11.029 16:37:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:11.029 16:37:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:11.029 16:37:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.029 16:37:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:11.289 "name": "raid_bdev1", 00:20:11.289 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:11.289 "strip_size_kb": 0, 00:20:11.289 "state": 
"online", 00:20:11.289 "raid_level": "raid1", 00:20:11.289 "superblock": true, 00:20:11.289 "num_base_bdevs": 2, 00:20:11.289 "num_base_bdevs_discovered": 2, 00:20:11.289 "num_base_bdevs_operational": 2, 00:20:11.289 "base_bdevs_list": [ 00:20:11.289 { 00:20:11.289 "name": "spare", 00:20:11.289 "uuid": "2a81b8de-6726-50ee-ae6c-366b5db68949", 00:20:11.289 "is_configured": true, 00:20:11.289 "data_offset": 2048, 00:20:11.289 "data_size": 63488 00:20:11.289 }, 00:20:11.289 { 00:20:11.289 "name": "BaseBdev2", 00:20:11.289 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:11.289 "is_configured": true, 00:20:11.289 "data_offset": 2048, 00:20:11.289 "data_size": 63488 00:20:11.289 } 00:20:11.289 ] 00:20:11.289 }' 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@660 -- # break 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.289 16:37:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:11.546 "name": "raid_bdev1", 00:20:11.546 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:11.546 "strip_size_kb": 0, 00:20:11.546 "state": "online", 00:20:11.546 "raid_level": "raid1", 00:20:11.546 "superblock": true, 00:20:11.546 "num_base_bdevs": 2, 00:20:11.546 "num_base_bdevs_discovered": 2, 00:20:11.546 "num_base_bdevs_operational": 2, 00:20:11.546 "base_bdevs_list": [ 00:20:11.546 { 00:20:11.546 "name": "spare", 00:20:11.546 "uuid": "2a81b8de-6726-50ee-ae6c-366b5db68949", 00:20:11.546 "is_configured": true, 00:20:11.546 "data_offset": 2048, 00:20:11.546 "data_size": 63488 00:20:11.546 }, 00:20:11.546 { 00:20:11.546 "name": "BaseBdev2", 00:20:11.546 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:11.546 "is_configured": true, 00:20:11.546 "data_offset": 2048, 00:20:11.546 "data_size": 63488 00:20:11.546 } 00:20:11.546 ] 00:20:11.546 }' 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:11.546 16:37:42 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.546 16:37:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.804 16:37:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:11.804 "name": "raid_bdev1", 00:20:11.804 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:11.804 "strip_size_kb": 0, 00:20:11.804 "state": "online", 00:20:11.804 "raid_level": "raid1", 00:20:11.804 "superblock": true, 00:20:11.804 "num_base_bdevs": 2, 00:20:11.804 "num_base_bdevs_discovered": 2, 00:20:11.804 "num_base_bdevs_operational": 2, 00:20:11.804 "base_bdevs_list": [ 00:20:11.804 { 00:20:11.804 "name": "spare", 00:20:11.804 "uuid": "2a81b8de-6726-50ee-ae6c-366b5db68949", 00:20:11.804 "is_configured": true, 00:20:11.804 "data_offset": 2048, 00:20:11.804 "data_size": 63488 00:20:11.804 }, 00:20:11.804 { 00:20:11.804 "name": "BaseBdev2", 00:20:11.804 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:11.804 "is_configured": true, 00:20:11.804 "data_offset": 2048, 00:20:11.804 "data_size": 63488 00:20:11.804 } 00:20:11.804 ] 00:20:11.804 }' 00:20:11.804 16:37:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:11.804 16:37:43 -- common/autotest_common.sh@10 -- # set +x 00:20:12.395 16:37:43 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:12.653 [2024-07-13 16:37:44.080764] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:12.653 [2024-07-13 16:37:44.080825] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.653 [2024-07-13 16:37:44.080983] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.653 [2024-07-13 16:37:44.081090] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.653 [2024-07-13 16:37:44.081102] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:20:12.653 16:37:44 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.653 16:37:44 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:12.911 16:37:44 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:12.911 16:37:44 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:12.911 16:37:44 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:12.911 16:37:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:12.911 16:37:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:12.911 16:37:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:12.911 16:37:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:12.911 16:37:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:12.911 16:37:44 -- bdev/nbd_common.sh@12 -- # local i 00:20:12.911 16:37:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:12.911 16:37:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:12.911 16:37:44 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:13.170 /dev/nbd0 00:20:13.170 16:37:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:13.170 16:37:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:13.170 16:37:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:13.170 16:37:44 -- common/autotest_common.sh@857 -- # local i 00:20:13.170 16:37:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:13.170 16:37:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:13.170 16:37:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:13.170 16:37:44 -- common/autotest_common.sh@861 -- # break 00:20:13.170 16:37:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:13.170 16:37:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:13.170 16:37:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.170 1+0 records in 00:20:13.170 1+0 records out 00:20:13.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684454 s, 6.0 MB/s 00:20:13.170 16:37:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.170 16:37:44 -- common/autotest_common.sh@874 -- # size=4096 00:20:13.170 16:37:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.170 16:37:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:13.170 16:37:44 -- common/autotest_common.sh@877 -- # return 0 00:20:13.170 16:37:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:13.170 16:37:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:13.170 16:37:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:13.429 /dev/nbd1 00:20:13.429 16:37:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:13.689 16:37:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:13.689 16:37:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:13.689 16:37:44 -- common/autotest_common.sh@857 -- # local i 00:20:13.689 16:37:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:13.689 16:37:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:13.689 16:37:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:13.689 16:37:44 -- common/autotest_common.sh@861 -- # break 00:20:13.689 16:37:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:13.689 16:37:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:13.689 16:37:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.689 1+0 records in 00:20:13.689 1+0 records out 00:20:13.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527498 s, 7.8 MB/s 00:20:13.689 16:37:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.689 16:37:44 -- common/autotest_common.sh@874 -- # size=4096 00:20:13.689 16:37:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.689 16:37:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:13.689 16:37:44 -- common/autotest_common.sh@877 -- # return 0 00:20:13.689 16:37:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:13.689 16:37:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:13.689 16:37:44 -- bdev/bdev_raid.sh@688 
-- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:13.689 16:37:45 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:13.689 16:37:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:13.689 16:37:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:13.689 16:37:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:13.689 16:37:45 -- bdev/nbd_common.sh@51 -- # local i 00:20:13.689 16:37:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:13.689 16:37:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@41 -- # break 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@45 -- # return 0 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:13.948 16:37:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:14.207 16:37:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:14.207 16:37:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:14.207 16:37:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:14.207 16:37:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:14.207 16:37:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:14.207 16:37:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:14.207 16:37:45 -- bdev/nbd_common.sh@41 -- # break 00:20:14.207 16:37:45 -- bdev/nbd_common.sh@45 -- # return 0 00:20:14.207 16:37:45 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:14.207 16:37:45 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:14.207 16:37:45 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:14.207 16:37:45 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:14.463 16:37:45 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:14.721 [2024-07-13 16:37:45.944149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:14.721 [2024-07-13 16:37:45.944303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.721 [2024-07-13 16:37:45.944367] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:14.721 [2024-07-13 16:37:45.944403] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.721 [2024-07-13 16:37:45.947533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.721 [2024-07-13 16:37:45.947633] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:14.721 [2024-07-13 16:37:45.947767] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:14.721 [2024-07-13 16:37:45.947833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:20:14.721 BaseBdev1 00:20:14.721 16:37:45 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:14.721 16:37:45 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:14.721 16:37:45 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:14.980 16:37:46 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:15.239 [2024-07-13 16:37:46.512709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:15.239 [2024-07-13 16:37:46.512876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.239 [2024-07-13 16:37:46.512957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:15.239 [2024-07-13 16:37:46.513009] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.239 [2024-07-13 16:37:46.513542] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.239 [2024-07-13 16:37:46.513616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:15.239 [2024-07-13 16:37:46.513730] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:15.239 [2024-07-13 16:37:46.513743] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:15.239 [2024-07-13 16:37:46.513752] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.239 [2024-07-13 16:37:46.513787] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:20:15.239 [2024-07-13 16:37:46.513851] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:15.239 BaseBdev2 00:20:15.239 16:37:46 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:15.498 16:37:46 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:15.498 [2024-07-13 16:37:46.940707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:15.498 [2024-07-13 16:37:46.940866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.498 [2024-07-13 16:37:46.940936] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:15.498 [2024-07-13 16:37:46.940968] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.499 [2024-07-13 16:37:46.941554] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.499 [2024-07-13 16:37:46.941619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:15.499 [2024-07-13 16:37:46.941734] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:15.499 [2024-07-13 16:37:46.941776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.499 spare 00:20:15.499 16:37:46 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:15.499 16:37:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:15.499 16:37:46 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:15.499 16:37:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:15.499 16:37:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:15.499 16:37:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:15.499 16:37:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.758 16:37:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.758 16:37:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.758 16:37:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.758 16:37:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.758 16:37:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.758 [2024-07-13 16:37:47.041897] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:20:15.758 [2024-07-13 16:37:47.041945] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:15.758 [2024-07-13 16:37:47.042195] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:20:15.758 [2024-07-13 16:37:47.042670] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:20:15.758 [2024-07-13 16:37:47.042691] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:20:15.758 [2024-07-13 16:37:47.042826] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.758 16:37:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.758 "name": "raid_bdev1", 00:20:15.758 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:15.758 "strip_size_kb": 0, 00:20:15.758 "state": "online", 00:20:15.758 "raid_level": "raid1", 00:20:15.758 "superblock": true, 00:20:15.758 "num_base_bdevs": 2, 00:20:15.758 "num_base_bdevs_discovered": 2, 00:20:15.758 "num_base_bdevs_operational": 2, 00:20:15.758 "base_bdevs_list": [ 00:20:15.758 { 00:20:15.758 "name": "spare", 00:20:15.758 "uuid": "2a81b8de-6726-50ee-ae6c-366b5db68949", 00:20:15.758 "is_configured": true, 00:20:15.758 "data_offset": 2048, 00:20:15.758 "data_size": 63488 00:20:15.758 }, 00:20:15.758 { 00:20:15.758 "name": "BaseBdev2", 00:20:15.758 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:15.758 "is_configured": true, 00:20:15.758 "data_offset": 2048, 00:20:15.758 "data_size": 63488 00:20:15.758 } 00:20:15.758 ] 00:20:15.758 }' 00:20:15.758 16:37:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.758 16:37:47 -- common/autotest_common.sh@10 -- # set +x 00:20:16.326 16:37:47 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.326 16:37:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.326 16:37:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:16.326 16:37:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:16.326 16:37:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.326 16:37:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.326 16:37:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.584 16:37:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.584 "name": "raid_bdev1", 00:20:16.584 "uuid": "a45f0502-ca00-46c6-a25c-d36cb66381b5", 00:20:16.584 "strip_size_kb": 0, 00:20:16.584 "state": "online", 
00:20:16.584 "raid_level": "raid1", 00:20:16.584 "superblock": true, 00:20:16.584 "num_base_bdevs": 2, 00:20:16.584 "num_base_bdevs_discovered": 2, 00:20:16.584 "num_base_bdevs_operational": 2, 00:20:16.584 "base_bdevs_list": [ 00:20:16.584 { 00:20:16.584 "name": "spare", 00:20:16.584 "uuid": "2a81b8de-6726-50ee-ae6c-366b5db68949", 00:20:16.584 "is_configured": true, 00:20:16.584 "data_offset": 2048, 00:20:16.584 "data_size": 63488 00:20:16.584 }, 00:20:16.584 { 00:20:16.584 "name": "BaseBdev2", 00:20:16.584 "uuid": "88176f5c-3c9f-5190-b6c5-f10f00301642", 00:20:16.584 "is_configured": true, 00:20:16.584 "data_offset": 2048, 00:20:16.584 "data_size": 63488 00:20:16.584 } 00:20:16.584 ] 00:20:16.584 }' 00:20:16.584 16:37:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.844 16:37:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:16.844 16:37:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.844 16:37:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:16.844 16:37:48 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:16.844 16:37:48 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.102 16:37:48 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.102 16:37:48 -- bdev/bdev_raid.sh@709 -- # killprocess 133505 00:20:17.102 16:37:48 -- common/autotest_common.sh@926 -- # '[' -z 133505 ']' 00:20:17.102 16:37:48 -- common/autotest_common.sh@930 -- # kill -0 133505 00:20:17.102 16:37:48 -- common/autotest_common.sh@931 -- # uname 00:20:17.102 16:37:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.102 16:37:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133505 00:20:17.102 killing process with pid 133505 00:20:17.102 Received shutdown signal, test time was about 60.000000 seconds 00:20:17.102 00:20:17.102 Latency(us) 00:20:17.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.102 =================================================================================================================== 00:20:17.102 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:17.102 16:37:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:17.102 16:37:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:17.102 16:37:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133505' 00:20:17.102 16:37:48 -- common/autotest_common.sh@945 -- # kill 133505 00:20:17.102 16:37:48 -- common/autotest_common.sh@950 -- # wait 133505 00:20:17.102 [2024-07-13 16:37:48.438907] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:17.102 [2024-07-13 16:37:48.439046] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.102 [2024-07-13 16:37:48.439126] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.102 [2024-07-13 16:37:48.439149] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:20:17.102 [2024-07-13 16:37:48.498838] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:17.669 ************************************ 00:20:17.669 END TEST raid_rebuild_test_sb 00:20:17.669 ************************************ 00:20:17.669 16:37:48 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:17.669 00:20:17.669 real 0m24.969s 00:20:17.669 
user 0m35.589s 00:20:17.669 sys 0m5.234s 00:20:17.669 16:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.669 16:37:48 -- common/autotest_common.sh@10 -- # set +x 00:20:17.669 16:37:48 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:20:17.669 16:37:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:17.669 16:37:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:17.669 16:37:48 -- common/autotest_common.sh@10 -- # set +x 00:20:17.669 ************************************ 00:20:17.669 START TEST raid_rebuild_test_io 00:20:17.669 ************************************ 00:20:17.669 16:37:49 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:17.669 16:37:49 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:17.670 16:37:49 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:17.670 16:37:49 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:17.670 16:37:49 -- bdev/bdev_raid.sh@544 -- # raid_pid=134127 00:20:17.670 16:37:49 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134127 /var/tmp/spdk-raid.sock 00:20:17.670 16:37:49 -- common/autotest_common.sh@819 -- # '[' -z 134127 ']' 00:20:17.670 16:37:49 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:17.670 16:37:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:17.670 16:37:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.670 16:37:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:17.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:17.670 16:37:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.670 16:37:49 -- common/autotest_common.sh@10 -- # set +x 00:20:17.670 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:17.670 Zero copy mechanism will not be used. 
00:20:17.670 [2024-07-13 16:37:49.098751] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:17.670 [2024-07-13 16:37:49.099040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134127 ] 00:20:17.929 [2024-07-13 16:37:49.256779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.929 [2024-07-13 16:37:49.342028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.188 [2024-07-13 16:37:49.424183] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:18.756 16:37:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.756 16:37:50 -- common/autotest_common.sh@852 -- # return 0 00:20:18.756 16:37:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:18.756 16:37:50 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:18.756 16:37:50 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:19.015 BaseBdev1 00:20:19.015 16:37:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:19.015 16:37:50 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:19.015 16:37:50 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:19.274 BaseBdev2 00:20:19.274 16:37:50 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:19.533 spare_malloc 00:20:19.533 16:37:50 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:19.791 spare_delay 00:20:19.792 16:37:51 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:20.050 [2024-07-13 16:37:51.313800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:20.050 [2024-07-13 16:37:51.313990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.050 [2024-07-13 16:37:51.314051] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:20.050 [2024-07-13 16:37:51.314115] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.050 [2024-07-13 16:37:51.317414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.050 [2024-07-13 16:37:51.317511] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:20.050 spare 00:20:20.050 16:37:51 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:20.310 [2024-07-13 16:37:51.530004] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:20.310 [2024-07-13 16:37:51.532638] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.310 [2024-07-13 16:37:51.532753] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:20:20.310 [2024-07-13 16:37:51.532764] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 65536, blocklen 512 00:20:20.310 [2024-07-13 16:37:51.532998] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:20:20.310 [2024-07-13 16:37:51.533500] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:20:20.310 [2024-07-13 16:37:51.533522] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:20:20.310 [2024-07-13 16:37:51.533741] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.310 "name": "raid_bdev1", 00:20:20.310 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:20.310 "strip_size_kb": 0, 00:20:20.310 "state": "online", 00:20:20.310 "raid_level": "raid1", 00:20:20.310 "superblock": false, 00:20:20.310 "num_base_bdevs": 2, 00:20:20.310 "num_base_bdevs_discovered": 2, 00:20:20.310 "num_base_bdevs_operational": 2, 00:20:20.310 "base_bdevs_list": [ 00:20:20.310 { 00:20:20.310 "name": "BaseBdev1", 00:20:20.310 "uuid": "f6269355-5e2f-4af0-afbe-b09ff50e9681", 00:20:20.310 "is_configured": true, 00:20:20.310 "data_offset": 0, 00:20:20.310 "data_size": 65536 00:20:20.310 }, 00:20:20.310 { 00:20:20.310 "name": "BaseBdev2", 00:20:20.310 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:20.310 "is_configured": true, 00:20:20.310 "data_offset": 0, 00:20:20.310 "data_size": 65536 00:20:20.310 } 00:20:20.310 ] 00:20:20.310 }' 00:20:20.310 16:37:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.310 16:37:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.914 16:37:52 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:20.914 16:37:52 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:21.172 [2024-07-13 16:37:52.590390] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:21.172 16:37:52 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:21.172 16:37:52 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.172 16:37:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:21.740 16:37:52 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:21.740 16:37:52 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:21.740 16:37:52 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:21.740 16:37:52 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:21.740 [2024-07-13 16:37:52.994125] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:20:21.740 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:21.740 Zero copy mechanism will not be used. 00:20:21.740 Running I/O for 60 seconds... 00:20:21.740 [2024-07-13 16:37:53.151531] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:21.740 [2024-07-13 16:37:53.157759] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.740 16:37:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.306 16:37:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.306 "name": "raid_bdev1", 00:20:22.306 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:22.306 "strip_size_kb": 0, 00:20:22.306 "state": "online", 00:20:22.306 "raid_level": "raid1", 00:20:22.306 "superblock": false, 00:20:22.306 "num_base_bdevs": 2, 00:20:22.306 "num_base_bdevs_discovered": 1, 00:20:22.306 "num_base_bdevs_operational": 1, 00:20:22.306 "base_bdevs_list": [ 00:20:22.306 { 00:20:22.306 "name": null, 00:20:22.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.306 "is_configured": false, 00:20:22.306 "data_offset": 0, 00:20:22.306 "data_size": 65536 00:20:22.306 }, 00:20:22.306 { 00:20:22.306 "name": "BaseBdev2", 00:20:22.306 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:22.306 "is_configured": true, 00:20:22.306 "data_offset": 0, 00:20:22.306 "data_size": 65536 00:20:22.306 } 00:20:22.306 ] 00:20:22.306 }' 00:20:22.306 16:37:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.306 16:37:53 -- common/autotest_common.sh@10 -- # set +x 00:20:22.872 16:37:54 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:23.131 [2024-07-13 16:37:54.356123] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:23.131 [2024-07-13 16:37:54.356221] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:23.131 16:37:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:23.131 [2024-07-13 16:37:54.412381] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:23.131 [2024-07-13 16:37:54.415218] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:23.131 [2024-07-13 16:37:54.532666] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:23.131 [2024-07-13 16:37:54.533454] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:23.390 [2024-07-13 16:37:54.758529] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:23.390 [2024-07-13 16:37:54.758956] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:23.648 [2024-07-13 16:37:55.009420] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:23.648 [2024-07-13 16:37:55.113801] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:23.649 [2024-07-13 16:37:55.114214] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:23.907 [2024-07-13 16:37:55.335124] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:24.167 16:37:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:24.167 16:37:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:24.167 16:37:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:24.167 16:37:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:24.167 16:37:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:24.167 16:37:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.167 16:37:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.167 [2024-07-13 16:37:55.544829] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:24.167 [2024-07-13 16:37:55.545301] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:24.426 16:37:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:24.426 "name": "raid_bdev1", 00:20:24.426 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:24.426 "strip_size_kb": 0, 00:20:24.426 "state": "online", 00:20:24.426 "raid_level": "raid1", 00:20:24.426 "superblock": false, 00:20:24.426 "num_base_bdevs": 2, 00:20:24.426 "num_base_bdevs_discovered": 2, 00:20:24.426 "num_base_bdevs_operational": 2, 00:20:24.426 "process": { 00:20:24.426 "type": "rebuild", 00:20:24.426 "target": "spare", 00:20:24.426 "progress": { 00:20:24.426 "blocks": 16384, 00:20:24.426 "percent": 25 00:20:24.426 } 00:20:24.426 }, 00:20:24.426 "base_bdevs_list": [ 00:20:24.426 { 00:20:24.426 "name": "spare", 00:20:24.426 "uuid": "91ce109e-4319-5fab-b51f-8d909b109012", 00:20:24.426 "is_configured": true, 00:20:24.426 "data_offset": 0, 00:20:24.426 "data_size": 65536 00:20:24.426 }, 00:20:24.426 { 00:20:24.426 "name": "BaseBdev2", 00:20:24.426 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:24.426 "is_configured": true, 00:20:24.426 "data_offset": 0, 00:20:24.426 "data_size": 65536 00:20:24.426 } 00:20:24.426 ] 00:20:24.426 }' 00:20:24.426 16:37:55 -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.type // "none"' 00:20:24.426 16:37:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:24.427 16:37:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:24.427 16:37:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:24.427 16:37:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:24.686 [2024-07-13 16:37:56.017456] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:24.686 [2024-07-13 16:37:56.033390] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:24.686 [2024-07-13 16:37:56.033743] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:24.686 [2024-07-13 16:37:56.047662] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:24.686 [2024-07-13 16:37:56.051444] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.686 [2024-07-13 16:37:56.068944] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.686 16:37:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.945 16:37:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.945 "name": "raid_bdev1", 00:20:24.945 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:24.945 "strip_size_kb": 0, 00:20:24.945 "state": "online", 00:20:24.945 "raid_level": "raid1", 00:20:24.945 "superblock": false, 00:20:24.945 "num_base_bdevs": 2, 00:20:24.945 "num_base_bdevs_discovered": 1, 00:20:24.945 "num_base_bdevs_operational": 1, 00:20:24.945 "base_bdevs_list": [ 00:20:24.945 { 00:20:24.945 "name": null, 00:20:24.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.945 "is_configured": false, 00:20:24.945 "data_offset": 0, 00:20:24.945 "data_size": 65536 00:20:24.945 }, 00:20:24.945 { 00:20:24.945 "name": "BaseBdev2", 00:20:24.945 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:24.945 "is_configured": true, 00:20:24.945 "data_offset": 0, 00:20:24.945 "data_size": 65536 00:20:24.945 } 00:20:24.945 ] 00:20:24.945 }' 00:20:24.945 16:37:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.945 16:37:56 -- common/autotest_common.sh@10 -- # set +x 00:20:25.514 16:37:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.514 16:37:56 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:20:25.514 16:37:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:25.514 16:37:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:25.514 16:37:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:25.514 16:37:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.514 16:37:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.082 16:37:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:26.082 "name": "raid_bdev1", 00:20:26.082 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:26.082 "strip_size_kb": 0, 00:20:26.082 "state": "online", 00:20:26.082 "raid_level": "raid1", 00:20:26.082 "superblock": false, 00:20:26.082 "num_base_bdevs": 2, 00:20:26.082 "num_base_bdevs_discovered": 1, 00:20:26.082 "num_base_bdevs_operational": 1, 00:20:26.082 "base_bdevs_list": [ 00:20:26.082 { 00:20:26.082 "name": null, 00:20:26.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.082 "is_configured": false, 00:20:26.082 "data_offset": 0, 00:20:26.082 "data_size": 65536 00:20:26.082 }, 00:20:26.082 { 00:20:26.082 "name": "BaseBdev2", 00:20:26.082 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:26.082 "is_configured": true, 00:20:26.082 "data_offset": 0, 00:20:26.082 "data_size": 65536 00:20:26.082 } 00:20:26.082 ] 00:20:26.082 }' 00:20:26.082 16:37:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:26.082 16:37:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:26.082 16:37:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:26.082 16:37:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:26.082 16:37:57 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:26.341 [2024-07-13 16:37:57.649666] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:26.341 [2024-07-13 16:37:57.649751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.341 [2024-07-13 16:37:57.698164] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:26.341 [2024-07-13 16:37:57.700882] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:26.341 16:37:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:26.600 [2024-07-13 16:37:57.818017] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:26.600 [2024-07-13 16:37:57.818765] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:26.600 [2024-07-13 16:37:57.928146] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:26.600 [2024-07-13 16:37:57.928597] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:26.859 [2024-07-13 16:37:58.266725] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:27.117 [2024-07-13 16:37:58.486869] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:27.117 [2024-07-13 16:37:58.487297] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
10240 offset_begin: 6144 offset_end: 12288 00:20:27.376 16:37:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.376 16:37:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.376 16:37:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:27.376 16:37:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:27.376 16:37:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.376 16:37:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.376 16:37:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.376 [2024-07-13 16:37:58.843659] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:27.376 [2024-07-13 16:37:58.844350] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:27.635 16:37:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.635 "name": "raid_bdev1", 00:20:27.635 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:27.635 "strip_size_kb": 0, 00:20:27.635 "state": "online", 00:20:27.635 "raid_level": "raid1", 00:20:27.635 "superblock": false, 00:20:27.635 "num_base_bdevs": 2, 00:20:27.635 "num_base_bdevs_discovered": 2, 00:20:27.635 "num_base_bdevs_operational": 2, 00:20:27.635 "process": { 00:20:27.635 "type": "rebuild", 00:20:27.635 "target": "spare", 00:20:27.635 "progress": { 00:20:27.635 "blocks": 14336, 00:20:27.635 "percent": 21 00:20:27.635 } 00:20:27.635 }, 00:20:27.635 "base_bdevs_list": [ 00:20:27.635 { 00:20:27.635 "name": "spare", 00:20:27.635 "uuid": "91ce109e-4319-5fab-b51f-8d909b109012", 00:20:27.635 "is_configured": true, 00:20:27.635 "data_offset": 0, 00:20:27.635 "data_size": 65536 00:20:27.635 }, 00:20:27.635 { 00:20:27.635 "name": "BaseBdev2", 00:20:27.635 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:27.635 "is_configured": true, 00:20:27.635 "data_offset": 0, 00:20:27.635 "data_size": 65536 00:20:27.635 } 00:20:27.635 ] 00:20:27.635 }' 00:20:27.635 16:37:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.635 [2024-07-13 16:37:59.062601] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@657 -- # local timeout=415 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.635 16:37:59 -- 
bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.635 16:37:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.894 16:37:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.894 "name": "raid_bdev1", 00:20:27.894 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:27.894 "strip_size_kb": 0, 00:20:27.894 "state": "online", 00:20:27.894 "raid_level": "raid1", 00:20:27.894 "superblock": false, 00:20:27.895 "num_base_bdevs": 2, 00:20:27.895 "num_base_bdevs_discovered": 2, 00:20:27.895 "num_base_bdevs_operational": 2, 00:20:27.895 "process": { 00:20:27.895 "type": "rebuild", 00:20:27.895 "target": "spare", 00:20:27.895 "progress": { 00:20:27.895 "blocks": 18432, 00:20:27.895 "percent": 28 00:20:27.895 } 00:20:27.895 }, 00:20:27.895 "base_bdevs_list": [ 00:20:27.895 { 00:20:27.895 "name": "spare", 00:20:27.895 "uuid": "91ce109e-4319-5fab-b51f-8d909b109012", 00:20:27.895 "is_configured": true, 00:20:27.895 "data_offset": 0, 00:20:27.895 "data_size": 65536 00:20:27.895 }, 00:20:27.895 { 00:20:27.895 "name": "BaseBdev2", 00:20:27.895 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:27.895 "is_configured": true, 00:20:27.895 "data_offset": 0, 00:20:27.895 "data_size": 65536 00:20:27.895 } 00:20:27.895 ] 00:20:27.895 }' 00:20:28.154 16:37:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:28.154 16:37:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.154 16:37:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:28.154 16:37:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.154 16:37:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:28.154 [2024-07-13 16:37:59.522847] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:28.413 [2024-07-13 16:37:59.756939] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:28.413 [2024-07-13 16:37:59.880352] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:28.679 [2024-07-13 16:38:00.121224] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:29.246 16:38:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:29.246 16:38:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.246 16:38:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:29.246 16:38:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:29.246 16:38:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:29.246 16:38:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:29.246 16:38:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.246 16:38:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.246 [2024-07-13 16:38:00.540886] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:29.504 16:38:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:29.504 "name": "raid_bdev1", 00:20:29.504 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:29.504 "strip_size_kb": 0, 00:20:29.504 "state": "online", 00:20:29.504 
"raid_level": "raid1", 00:20:29.504 "superblock": false, 00:20:29.504 "num_base_bdevs": 2, 00:20:29.504 "num_base_bdevs_discovered": 2, 00:20:29.504 "num_base_bdevs_operational": 2, 00:20:29.504 "process": { 00:20:29.504 "type": "rebuild", 00:20:29.504 "target": "spare", 00:20:29.504 "progress": { 00:20:29.504 "blocks": 38912, 00:20:29.504 "percent": 59 00:20:29.504 } 00:20:29.504 }, 00:20:29.504 "base_bdevs_list": [ 00:20:29.504 { 00:20:29.504 "name": "spare", 00:20:29.504 "uuid": "91ce109e-4319-5fab-b51f-8d909b109012", 00:20:29.504 "is_configured": true, 00:20:29.504 "data_offset": 0, 00:20:29.504 "data_size": 65536 00:20:29.504 }, 00:20:29.504 { 00:20:29.504 "name": "BaseBdev2", 00:20:29.504 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:29.504 "is_configured": true, 00:20:29.504 "data_offset": 0, 00:20:29.504 "data_size": 65536 00:20:29.504 } 00:20:29.504 ] 00:20:29.504 }' 00:20:29.504 [2024-07-13 16:38:00.757091] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:29.504 16:38:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:29.504 16:38:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.504 16:38:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:29.504 16:38:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.504 16:38:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:30.437 [2024-07-13 16:38:01.842753] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:30.437 16:38:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:30.437 16:38:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.437 16:38:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.437 16:38:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:30.437 16:38:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:30.437 16:38:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.437 16:38:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.437 16:38:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.695 16:38:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.695 "name": "raid_bdev1", 00:20:30.695 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:30.695 "strip_size_kb": 0, 00:20:30.695 "state": "online", 00:20:30.695 "raid_level": "raid1", 00:20:30.695 "superblock": false, 00:20:30.695 "num_base_bdevs": 2, 00:20:30.695 "num_base_bdevs_discovered": 2, 00:20:30.695 "num_base_bdevs_operational": 2, 00:20:30.695 "process": { 00:20:30.695 "type": "rebuild", 00:20:30.695 "target": "spare", 00:20:30.695 "progress": { 00:20:30.695 "blocks": 63488, 00:20:30.695 "percent": 96 00:20:30.695 } 00:20:30.695 }, 00:20:30.695 "base_bdevs_list": [ 00:20:30.695 { 00:20:30.695 "name": "spare", 00:20:30.695 "uuid": "91ce109e-4319-5fab-b51f-8d909b109012", 00:20:30.695 "is_configured": true, 00:20:30.695 "data_offset": 0, 00:20:30.695 "data_size": 65536 00:20:30.695 }, 00:20:30.695 { 00:20:30.695 "name": "BaseBdev2", 00:20:30.695 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:30.695 "is_configured": true, 00:20:30.695 "data_offset": 0, 00:20:30.695 "data_size": 65536 00:20:30.695 } 00:20:30.695 ] 00:20:30.695 }' 00:20:30.695 16:38:02 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:20:30.954 16:38:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.954 16:38:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.954 [2024-07-13 16:38:02.182587] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:30.954 16:38:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.954 16:38:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:30.954 [2024-07-13 16:38:02.288548] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:30.954 [2024-07-13 16:38:02.291454] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.915 16:38:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:31.915 16:38:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.915 16:38:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:31.915 16:38:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:31.915 16:38:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:31.915 16:38:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:31.915 16:38:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.915 16:38:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.173 "name": "raid_bdev1", 00:20:32.173 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:32.173 "strip_size_kb": 0, 00:20:32.173 "state": "online", 00:20:32.173 "raid_level": "raid1", 00:20:32.173 "superblock": false, 00:20:32.173 "num_base_bdevs": 2, 00:20:32.173 "num_base_bdevs_discovered": 2, 00:20:32.173 "num_base_bdevs_operational": 2, 00:20:32.173 "base_bdevs_list": [ 00:20:32.173 { 00:20:32.173 "name": "spare", 00:20:32.173 "uuid": "91ce109e-4319-5fab-b51f-8d909b109012", 00:20:32.173 "is_configured": true, 00:20:32.173 "data_offset": 0, 00:20:32.173 "data_size": 65536 00:20:32.173 }, 00:20:32.173 { 00:20:32.173 "name": "BaseBdev2", 00:20:32.173 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:32.173 "is_configured": true, 00:20:32.173 "data_offset": 0, 00:20:32.173 "data_size": 65536 00:20:32.173 } 00:20:32.173 ] 00:20:32.173 }' 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@660 -- # break 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.173 16:38:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.431 16:38:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.431 "name": "raid_bdev1", 00:20:32.431 "uuid": 
"a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:32.431 "strip_size_kb": 0, 00:20:32.431 "state": "online", 00:20:32.431 "raid_level": "raid1", 00:20:32.431 "superblock": false, 00:20:32.431 "num_base_bdevs": 2, 00:20:32.431 "num_base_bdevs_discovered": 2, 00:20:32.431 "num_base_bdevs_operational": 2, 00:20:32.431 "base_bdevs_list": [ 00:20:32.431 { 00:20:32.431 "name": "spare", 00:20:32.431 "uuid": "91ce109e-4319-5fab-b51f-8d909b109012", 00:20:32.431 "is_configured": true, 00:20:32.431 "data_offset": 0, 00:20:32.431 "data_size": 65536 00:20:32.431 }, 00:20:32.431 { 00:20:32.431 "name": "BaseBdev2", 00:20:32.431 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:32.431 "is_configured": true, 00:20:32.431 "data_offset": 0, 00:20:32.431 "data_size": 65536 00:20:32.431 } 00:20:32.431 ] 00:20:32.431 }' 00:20:32.431 16:38:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.431 16:38:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:32.431 16:38:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.688 16:38:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.946 16:38:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.946 "name": "raid_bdev1", 00:20:32.946 "uuid": "a79b615d-8f4e-4d45-8f66-85908feeef11", 00:20:32.946 "strip_size_kb": 0, 00:20:32.946 "state": "online", 00:20:32.946 "raid_level": "raid1", 00:20:32.946 "superblock": false, 00:20:32.946 "num_base_bdevs": 2, 00:20:32.946 "num_base_bdevs_discovered": 2, 00:20:32.946 "num_base_bdevs_operational": 2, 00:20:32.946 "base_bdevs_list": [ 00:20:32.946 { 00:20:32.946 "name": "spare", 00:20:32.946 "uuid": "91ce109e-4319-5fab-b51f-8d909b109012", 00:20:32.946 "is_configured": true, 00:20:32.946 "data_offset": 0, 00:20:32.946 "data_size": 65536 00:20:32.946 }, 00:20:32.946 { 00:20:32.946 "name": "BaseBdev2", 00:20:32.946 "uuid": "c6465103-a634-4cd5-af7a-d7a4c486f222", 00:20:32.946 "is_configured": true, 00:20:32.946 "data_offset": 0, 00:20:32.946 "data_size": 65536 00:20:32.946 } 00:20:32.946 ] 00:20:32.947 }' 00:20:32.947 16:38:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.947 16:38:04 -- common/autotest_common.sh@10 -- # set +x 00:20:33.513 16:38:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:33.770 [2024-07-13 16:38:05.013422] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.771 [2024-07-13 16:38:05.013489] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.771 00:20:33.771 Latency(us) 00:20:33.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.771 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:33.771 raid_bdev1 : 12.10 96.99 290.98 0.00 0.00 14744.76 399.85 116841.33 00:20:33.771 =================================================================================================================== 00:20:33.771 Total : 96.99 290.98 0.00 0.00 14744.76 399.85 116841.33 00:20:33.771 [2024-07-13 16:38:05.106889] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.771 [2024-07-13 16:38:05.106973] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.771 [2024-07-13 16:38:05.107078] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.771 [2024-07-13 16:38:05.107092] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:20:33.771 0 00:20:33.771 16:38:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.771 16:38:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:34.030 16:38:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:34.030 16:38:05 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:34.030 16:38:05 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:34.030 16:38:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:34.030 16:38:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:34.030 16:38:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:34.030 16:38:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:34.030 16:38:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:34.030 16:38:05 -- bdev/nbd_common.sh@12 -- # local i 00:20:34.030 16:38:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:34.030 16:38:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.030 16:38:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:34.288 /dev/nbd0 00:20:34.288 16:38:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:34.288 16:38:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:34.288 16:38:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:34.288 16:38:05 -- common/autotest_common.sh@857 -- # local i 00:20:34.288 16:38:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:34.288 16:38:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:34.288 16:38:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:34.288 16:38:05 -- common/autotest_common.sh@861 -- # break 00:20:34.288 16:38:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:34.288 16:38:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:34.288 16:38:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.288 1+0 records in 00:20:34.288 1+0 records out 00:20:34.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689481 s, 5.9 MB/s 00:20:34.289 16:38:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.289 16:38:05 -- common/autotest_common.sh@874 -- # 
size=4096 00:20:34.289 16:38:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.289 16:38:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:34.289 16:38:05 -- common/autotest_common.sh@877 -- # return 0 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.289 16:38:05 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:34.289 16:38:05 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:34.289 16:38:05 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@12 -- # local i 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.289 16:38:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:34.856 /dev/nbd1 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:34.856 16:38:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:34.856 16:38:06 -- common/autotest_common.sh@857 -- # local i 00:20:34.856 16:38:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:34.856 16:38:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:34.856 16:38:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:34.856 16:38:06 -- common/autotest_common.sh@861 -- # break 00:20:34.856 16:38:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:34.856 16:38:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:34.856 16:38:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.856 1+0 records in 00:20:34.856 1+0 records out 00:20:34.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672673 s, 6.1 MB/s 00:20:34.856 16:38:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.856 16:38:06 -- common/autotest_common.sh@874 -- # size=4096 00:20:34.856 16:38:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.856 16:38:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:34.856 16:38:06 -- common/autotest_common.sh@877 -- # return 0 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.856 16:38:06 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:34.856 16:38:06 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@51 -- # local i 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:20:34.856 16:38:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@41 -- # break 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.114 16:38:06 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@51 -- # local i 00:20:35.114 16:38:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.115 16:38:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:35.373 16:38:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:35.373 16:38:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:35.373 16:38:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:35.373 16:38:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.373 16:38:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.373 16:38:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:35.373 16:38:06 -- bdev/nbd_common.sh@41 -- # break 00:20:35.373 16:38:06 -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.373 16:38:06 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:35.373 16:38:06 -- bdev/bdev_raid.sh@709 -- # killprocess 134127 00:20:35.373 16:38:06 -- common/autotest_common.sh@926 -- # '[' -z 134127 ']' 00:20:35.373 16:38:06 -- common/autotest_common.sh@930 -- # kill -0 134127 00:20:35.373 16:38:06 -- common/autotest_common.sh@931 -- # uname 00:20:35.373 16:38:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:35.373 16:38:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134127 00:20:35.373 16:38:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:35.373 16:38:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:35.373 16:38:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134127' 00:20:35.373 killing process with pid 134127 00:20:35.373 16:38:06 -- common/autotest_common.sh@945 -- # kill 134127 00:20:35.373 Received shutdown signal, test time was about 13.754926 seconds 00:20:35.373 00:20:35.373 Latency(us) 00:20:35.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.373 =================================================================================================================== 00:20:35.373 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.373 16:38:06 -- common/autotest_common.sh@950 -- # wait 134127 00:20:35.373 [2024-07-13 16:38:06.752170] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:35.373 [2024-07-13 16:38:06.802629] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:35.940 
00:20:35.940 real 0m18.222s 00:20:35.940 user 0m27.623s 00:20:35.940 sys 0m2.977s 00:20:35.940 16:38:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.940 16:38:07 -- common/autotest_common.sh@10 -- # set +x 00:20:35.940 ************************************ 00:20:35.940 END TEST raid_rebuild_test_io 00:20:35.940 ************************************ 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:20:35.940 16:38:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:35.940 16:38:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:35.940 16:38:07 -- common/autotest_common.sh@10 -- # set +x 00:20:35.940 ************************************ 00:20:35.940 START TEST raid_rebuild_test_sb_io 00:20:35.940 ************************************ 00:20:35.940 16:38:07 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@544 -- # raid_pid=134610 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134610 /var/tmp/spdk-raid.sock 00:20:35.940 16:38:07 -- common/autotest_common.sh@819 -- # '[' -z 134610 ']' 00:20:35.940 16:38:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:35.940 16:38:07 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:35.940 16:38:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:35.940 16:38:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:35.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
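Here the harness launches bdevperf against a private RPC socket and blocks until it answers before issuing any configuration RPCs. A minimal sketch of that handshake follows; the flag meanings come from the command line above (-z makes bdevperf wait for a perform_tests RPC instead of starting I/O immediately), while the rpc_get_methods polling loop is an assumption about how waitforlisten detects readiness, not a copy of it.

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock

    # 60 s of 50/50 random read/write, 3 MiB I/Os at queue depth 2,
    # raid debug logging on, started in "wait for RPC" mode (-z).
    $spdk/build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Block until the app answers on the socket (or bail if it died).
    until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" 2>/dev/null || exit 1
        sleep 0.2
    done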
00:20:35.940 16:38:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:35.940 16:38:07 -- common/autotest_common.sh@10 -- # set +x 00:20:35.940 [2024-07-13 16:38:07.403493] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:35.940 [2024-07-13 16:38:07.403766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134610 ] 00:20:35.940 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:35.940 Zero copy mechanism will not be used. 00:20:36.199 [2024-07-13 16:38:07.561350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.199 [2024-07-13 16:38:07.649603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.459 [2024-07-13 16:38:07.731496] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:37.027 16:38:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.027 16:38:08 -- common/autotest_common.sh@852 -- # return 0 00:20:37.027 16:38:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:37.027 16:38:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:37.027 16:38:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:37.284 BaseBdev1_malloc 00:20:37.284 16:38:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:37.541 [2024-07-13 16:38:08.848739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:37.541 [2024-07-13 16:38:08.848905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.541 [2024-07-13 16:38:08.848949] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:20:37.541 [2024-07-13 16:38:08.849015] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.541 [2024-07-13 16:38:08.852224] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.541 [2024-07-13 16:38:08.852328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:37.541 BaseBdev1 00:20:37.541 16:38:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:37.541 16:38:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:37.541 16:38:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:37.799 BaseBdev2_malloc 00:20:37.799 16:38:09 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:38.057 [2024-07-13 16:38:09.341634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:38.057 [2024-07-13 16:38:09.341791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.057 [2024-07-13 16:38:09.341844] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:38.057 [2024-07-13 16:38:09.341897] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.057 [2024-07-13 16:38:09.344901] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:38.057 [2024-07-13 16:38:09.344976] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:38.057 BaseBdev2 00:20:38.057 16:38:09 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:38.314 spare_malloc 00:20:38.314 16:38:09 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:38.573 spare_delay 00:20:38.573 16:38:09 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:38.831 [2024-07-13 16:38:10.058546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:38.831 [2024-07-13 16:38:10.058700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.831 [2024-07-13 16:38:10.058751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:38.831 [2024-07-13 16:38:10.058812] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.831 [2024-07-13 16:38:10.061962] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.831 [2024-07-13 16:38:10.062037] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:38.831 spare 00:20:38.831 16:38:10 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:39.089 [2024-07-13 16:38:10.330746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.089 [2024-07-13 16:38:10.333584] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:39.089 [2024-07-13 16:38:10.333837] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:20:39.089 [2024-07-13 16:38:10.333858] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:39.089 [2024-07-13 16:38:10.334082] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:39.089 [2024-07-13 16:38:10.334572] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:20:39.089 [2024-07-13 16:38:10.334591] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:20:39.089 [2024-07-13 16:38:10.334856] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.089 16:38:10 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.089 16:38:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.349 16:38:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.349 "name": "raid_bdev1", 00:20:39.349 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:39.349 "strip_size_kb": 0, 00:20:39.349 "state": "online", 00:20:39.349 "raid_level": "raid1", 00:20:39.349 "superblock": true, 00:20:39.349 "num_base_bdevs": 2, 00:20:39.349 "num_base_bdevs_discovered": 2, 00:20:39.349 "num_base_bdevs_operational": 2, 00:20:39.349 "base_bdevs_list": [ 00:20:39.349 { 00:20:39.349 "name": "BaseBdev1", 00:20:39.349 "uuid": "114106c0-5fb2-5e41-92df-55b1ac2909ef", 00:20:39.349 "is_configured": true, 00:20:39.349 "data_offset": 2048, 00:20:39.349 "data_size": 63488 00:20:39.349 }, 00:20:39.349 { 00:20:39.349 "name": "BaseBdev2", 00:20:39.349 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:39.349 "is_configured": true, 00:20:39.349 "data_offset": 2048, 00:20:39.349 "data_size": 63488 00:20:39.349 } 00:20:39.349 ] 00:20:39.349 }' 00:20:39.349 16:38:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.349 16:38:10 -- common/autotest_common.sh@10 -- # set +x 00:20:39.916 16:38:11 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:39.916 16:38:11 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:40.174 [2024-07-13 16:38:11.436643] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.174 16:38:11 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:40.174 16:38:11 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.174 16:38:11 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:40.434 16:38:11 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:40.434 16:38:11 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:40.434 16:38:11 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:40.434 16:38:11 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:40.434 [2024-07-13 16:38:11.848363] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:20:40.434 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:40.434 Zero copy mechanism will not be used. 00:20:40.434 Running I/O for 60 seconds... 
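Every verify_raid_bdev_state call in this log is the same idea: dump all raid bdevs, select the one under test by name, and assert on its fields. A condensed sketch, using the exact jq filters that appear above; note data_offset is 2048 512-byte blocks because the array was created with -s, which is the 1 MiB region the later cmp -i 1048576 skips.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    state=$(jq -r '.state' <<< "$info")                            # "online"
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")   # 2 when healthy
    offset=$($rpc bdev_raid_get_bdevs all \
             | jq -r '.[].base_bdevs_list[0].data_offset')         # 2048 with -s

    [ "$state" = online ] && [ "$discovered" -eq 2 ]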
00:20:40.693 [2024-07-13 16:38:11.980895] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:40.693 [2024-07-13 16:38:11.987275] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.693 16:38:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.951 16:38:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.951 "name": "raid_bdev1", 00:20:40.951 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:40.951 "strip_size_kb": 0, 00:20:40.951 "state": "online", 00:20:40.951 "raid_level": "raid1", 00:20:40.951 "superblock": true, 00:20:40.951 "num_base_bdevs": 2, 00:20:40.951 "num_base_bdevs_discovered": 1, 00:20:40.951 "num_base_bdevs_operational": 1, 00:20:40.951 "base_bdevs_list": [ 00:20:40.951 { 00:20:40.951 "name": null, 00:20:40.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.951 "is_configured": false, 00:20:40.951 "data_offset": 2048, 00:20:40.951 "data_size": 63488 00:20:40.951 }, 00:20:40.951 { 00:20:40.951 "name": "BaseBdev2", 00:20:40.951 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:40.951 "is_configured": true, 00:20:40.951 "data_offset": 2048, 00:20:40.951 "data_size": 63488 00:20:40.951 } 00:20:40.951 ] 00:20:40.951 }' 00:20:40.951 16:38:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.951 16:38:12 -- common/autotest_common.sh@10 -- # set +x 00:20:41.517 16:38:12 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:41.776 [2024-07-13 16:38:13.186572] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:41.776 [2024-07-13 16:38:13.186679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:41.776 [2024-07-13 16:38:13.235104] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:41.776 [2024-07-13 16:38:13.237955] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:41.776 16:38:13 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:42.053 [2024-07-13 16:38:13.362632] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:42.053 [2024-07-13 16:38:13.363413] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:42.311 [2024-07-13 16:38:13.581370] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:20:42.311 [2024-07-13 16:38:13.581785] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:42.568 [2024-07-13 16:38:13.918315] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:42.827 [2024-07-13 16:38:14.148658] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:42.827 [2024-07-13 16:38:14.149082] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:42.827 16:38:14 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.827 16:38:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:42.827 16:38:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:42.827 16:38:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:42.827 16:38:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:42.827 16:38:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.827 16:38:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.086 [2024-07-13 16:38:14.502765] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:43.086 [2024-07-13 16:38:14.503460] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:43.086 16:38:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:43.086 "name": "raid_bdev1", 00:20:43.086 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:43.086 "strip_size_kb": 0, 00:20:43.086 "state": "online", 00:20:43.086 "raid_level": "raid1", 00:20:43.086 "superblock": true, 00:20:43.086 "num_base_bdevs": 2, 00:20:43.086 "num_base_bdevs_discovered": 2, 00:20:43.086 "num_base_bdevs_operational": 2, 00:20:43.086 "process": { 00:20:43.086 "type": "rebuild", 00:20:43.086 "target": "spare", 00:20:43.086 "progress": { 00:20:43.086 "blocks": 14336, 00:20:43.086 "percent": 22 00:20:43.086 } 00:20:43.086 }, 00:20:43.086 "base_bdevs_list": [ 00:20:43.086 { 00:20:43.086 "name": "spare", 00:20:43.086 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:43.086 "is_configured": true, 00:20:43.086 "data_offset": 2048, 00:20:43.086 "data_size": 63488 00:20:43.086 }, 00:20:43.086 { 00:20:43.086 "name": "BaseBdev2", 00:20:43.086 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:43.086 "is_configured": true, 00:20:43.086 "data_offset": 2048, 00:20:43.086 "data_size": 63488 00:20:43.086 } 00:20:43.086 ] 00:20:43.086 }' 00:20:43.086 16:38:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:43.343 16:38:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.343 16:38:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:43.343 [2024-07-13 16:38:14.621361] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:43.343 16:38:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.343 16:38:14 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:43.600 [2024-07-13 16:38:14.865212] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:43.600 
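Stripped of the xtrace noise, the fault-injection sequence running here is three RPCs: degrade the raid1 by removing a member, hot-add the spare (which starts a rebuild, per the "Started rebuild" notice), then remove the spare again mid-rebuild. The checks that follow confirm the aborted rebuild leaves the array online but degraded. As a sketch, with the RPC names taken verbatim from this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc bdev_raid_remove_base_bdev BaseBdev1       # degrade: 2 -> 1 members
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare   # triggers the rebuild
    $rpc bdev_raid_remove_base_bdev spare           # yank the target mid-rebuild

    # Expected end state: still online, one base bdev discovered/operational.
    $rpc bdev_raid_get_bdevs all | jq -e \
        '.[] | select(.name == "raid_bdev1")
             | .state == "online" and .num_base_bdevs_discovered == 1'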
[2024-07-13 16:38:14.952170] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:43.600 [2024-07-13 16:38:14.952903] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:43.600 [2024-07-13 16:38:14.960044] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:43.600 [2024-07-13 16:38:14.975409] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.600 [2024-07-13 16:38:14.992330] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.600 16:38:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.859 16:38:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:43.859 "name": "raid_bdev1", 00:20:43.859 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:43.859 "strip_size_kb": 0, 00:20:43.859 "state": "online", 00:20:43.859 "raid_level": "raid1", 00:20:43.859 "superblock": true, 00:20:43.859 "num_base_bdevs": 2, 00:20:43.859 "num_base_bdevs_discovered": 1, 00:20:43.859 "num_base_bdevs_operational": 1, 00:20:43.859 "base_bdevs_list": [ 00:20:43.859 { 00:20:43.859 "name": null, 00:20:43.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.859 "is_configured": false, 00:20:43.859 "data_offset": 2048, 00:20:43.859 "data_size": 63488 00:20:43.859 }, 00:20:43.859 { 00:20:43.859 "name": "BaseBdev2", 00:20:43.859 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:43.859 "is_configured": true, 00:20:43.859 "data_offset": 2048, 00:20:43.859 "data_size": 63488 00:20:43.859 } 00:20:43.859 ] 00:20:43.859 }' 00:20:43.859 16:38:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:43.859 16:38:15 -- common/autotest_common.sh@10 -- # set +x 00:20:44.791 16:38:15 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:44.792 16:38:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:44.792 16:38:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:44.792 16:38:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:44.792 16:38:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:44.792 16:38:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.792 16:38:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.049 16:38:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:45.049 
"name": "raid_bdev1", 00:20:45.049 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:45.049 "strip_size_kb": 0, 00:20:45.049 "state": "online", 00:20:45.049 "raid_level": "raid1", 00:20:45.049 "superblock": true, 00:20:45.049 "num_base_bdevs": 2, 00:20:45.049 "num_base_bdevs_discovered": 1, 00:20:45.049 "num_base_bdevs_operational": 1, 00:20:45.049 "base_bdevs_list": [ 00:20:45.049 { 00:20:45.049 "name": null, 00:20:45.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.049 "is_configured": false, 00:20:45.049 "data_offset": 2048, 00:20:45.049 "data_size": 63488 00:20:45.049 }, 00:20:45.049 { 00:20:45.049 "name": "BaseBdev2", 00:20:45.049 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:45.049 "is_configured": true, 00:20:45.049 "data_offset": 2048, 00:20:45.049 "data_size": 63488 00:20:45.049 } 00:20:45.049 ] 00:20:45.049 }' 00:20:45.049 16:38:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:45.049 16:38:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:45.049 16:38:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:45.049 16:38:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:45.049 16:38:16 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:45.307 [2024-07-13 16:38:16.628952] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:45.307 [2024-07-13 16:38:16.629059] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.307 16:38:16 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:45.307 [2024-07-13 16:38:16.678177] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:20:45.307 [2024-07-13 16:38:16.680938] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:45.565 [2024-07-13 16:38:16.794880] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:45.565 [2024-07-13 16:38:16.795564] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:45.565 [2024-07-13 16:38:17.034740] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:45.565 [2024-07-13 16:38:17.035166] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:46.132 [2024-07-13 16:38:17.401717] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:46.390 [2024-07-13 16:38:17.635535] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:46.390 16:38:17 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.390 16:38:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:46.390 16:38:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:46.390 16:38:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:46.390 16:38:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:46.390 16:38:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.390 16:38:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.648 [2024-07-13 
16:38:17.908759] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:46.648 16:38:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:46.648 "name": "raid_bdev1", 00:20:46.648 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:46.648 "strip_size_kb": 0, 00:20:46.648 "state": "online", 00:20:46.648 "raid_level": "raid1", 00:20:46.648 "superblock": true, 00:20:46.648 "num_base_bdevs": 2, 00:20:46.648 "num_base_bdevs_discovered": 2, 00:20:46.648 "num_base_bdevs_operational": 2, 00:20:46.648 "process": { 00:20:46.648 "type": "rebuild", 00:20:46.648 "target": "spare", 00:20:46.648 "progress": { 00:20:46.648 "blocks": 12288, 00:20:46.648 "percent": 19 00:20:46.648 } 00:20:46.648 }, 00:20:46.648 "base_bdevs_list": [ 00:20:46.648 { 00:20:46.648 "name": "spare", 00:20:46.648 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:46.648 "is_configured": true, 00:20:46.648 "data_offset": 2048, 00:20:46.648 "data_size": 63488 00:20:46.648 }, 00:20:46.648 { 00:20:46.648 "name": "BaseBdev2", 00:20:46.648 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:46.648 "is_configured": true, 00:20:46.648 "data_offset": 2048, 00:20:46.648 "data_size": 63488 00:20:46.649 } 00:20:46.649 ] 00:20:46.649 }' 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:46.649 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@657 -- # local timeout=434 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:46.649 16:38:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.649 16:38:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.649 [2024-07-13 16:38:18.020303] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:46.907 16:38:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:46.907 "name": "raid_bdev1", 00:20:46.907 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:46.907 "strip_size_kb": 0, 00:20:46.907 "state": "online", 00:20:46.907 "raid_level": "raid1", 00:20:46.907 "superblock": true, 00:20:46.907 "num_base_bdevs": 2, 00:20:46.907 "num_base_bdevs_discovered": 2, 00:20:46.907 "num_base_bdevs_operational": 2, 00:20:46.907 "process": { 00:20:46.907 "type": "rebuild", 00:20:46.907 
"target": "spare", 00:20:46.907 "progress": { 00:20:46.907 "blocks": 16384, 00:20:46.907 "percent": 25 00:20:46.907 } 00:20:46.907 }, 00:20:46.907 "base_bdevs_list": [ 00:20:46.907 { 00:20:46.907 "name": "spare", 00:20:46.907 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:46.907 "is_configured": true, 00:20:46.907 "data_offset": 2048, 00:20:46.907 "data_size": 63488 00:20:46.907 }, 00:20:46.907 { 00:20:46.907 "name": "BaseBdev2", 00:20:46.907 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:46.907 "is_configured": true, 00:20:46.907 "data_offset": 2048, 00:20:46.907 "data_size": 63488 00:20:46.907 } 00:20:46.907 ] 00:20:46.907 }' 00:20:46.907 16:38:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:46.907 16:38:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.907 16:38:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:46.907 16:38:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.907 16:38:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:46.907 [2024-07-13 16:38:18.361786] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:47.165 [2024-07-13 16:38:18.565541] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:47.165 [2024-07-13 16:38:18.565918] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:47.423 [2024-07-13 16:38:18.881674] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:47.680 [2024-07-13 16:38:18.993795] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:47.938 [2024-07-13 16:38:19.305706] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:47.938 [2024-07-13 16:38:19.306381] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:47.938 16:38:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:47.938 16:38:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.938 16:38:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:47.938 16:38:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:47.938 16:38:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:47.938 16:38:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:47.938 16:38:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.938 16:38:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.194 16:38:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:48.194 "name": "raid_bdev1", 00:20:48.194 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:48.194 "strip_size_kb": 0, 00:20:48.194 "state": "online", 00:20:48.194 "raid_level": "raid1", 00:20:48.194 "superblock": true, 00:20:48.195 "num_base_bdevs": 2, 00:20:48.195 "num_base_bdevs_discovered": 2, 00:20:48.195 "num_base_bdevs_operational": 2, 00:20:48.195 "process": { 00:20:48.195 "type": "rebuild", 00:20:48.195 "target": "spare", 00:20:48.195 "progress": { 00:20:48.195 "blocks": 34816, 00:20:48.195 "percent": 54 00:20:48.195 } 00:20:48.195 }, 
00:20:48.195 "base_bdevs_list": [ 00:20:48.195 { 00:20:48.195 "name": "spare", 00:20:48.195 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:48.195 "is_configured": true, 00:20:48.195 "data_offset": 2048, 00:20:48.195 "data_size": 63488 00:20:48.195 }, 00:20:48.195 { 00:20:48.195 "name": "BaseBdev2", 00:20:48.195 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:48.195 "is_configured": true, 00:20:48.195 "data_offset": 2048, 00:20:48.195 "data_size": 63488 00:20:48.195 } 00:20:48.195 ] 00:20:48.195 }' 00:20:48.195 16:38:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:48.195 16:38:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.195 16:38:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:48.195 16:38:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.195 16:38:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:48.453 [2024-07-13 16:38:19.741577] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:48.453 [2024-07-13 16:38:19.742266] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:48.711 [2024-07-13 16:38:19.959289] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:48.969 [2024-07-13 16:38:20.265443] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:49.228 16:38:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:49.228 16:38:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.228 16:38:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:49.228 16:38:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:49.228 16:38:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:49.228 16:38:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:49.228 [2024-07-13 16:38:20.671236] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:49.228 16:38:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.228 16:38:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.487 16:38:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:49.487 "name": "raid_bdev1", 00:20:49.487 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:49.487 "strip_size_kb": 0, 00:20:49.487 "state": "online", 00:20:49.487 "raid_level": "raid1", 00:20:49.487 "superblock": true, 00:20:49.487 "num_base_bdevs": 2, 00:20:49.487 "num_base_bdevs_discovered": 2, 00:20:49.487 "num_base_bdevs_operational": 2, 00:20:49.487 "process": { 00:20:49.487 "type": "rebuild", 00:20:49.487 "target": "spare", 00:20:49.487 "progress": { 00:20:49.487 "blocks": 53248, 00:20:49.487 "percent": 83 00:20:49.487 } 00:20:49.487 }, 00:20:49.487 "base_bdevs_list": [ 00:20:49.487 { 00:20:49.487 "name": "spare", 00:20:49.487 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:49.487 "is_configured": true, 00:20:49.487 "data_offset": 2048, 00:20:49.487 "data_size": 63488 00:20:49.487 }, 00:20:49.487 { 00:20:49.487 "name": "BaseBdev2", 00:20:49.487 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:49.487 "is_configured": true, 00:20:49.487 "data_offset": 2048, 00:20:49.487 "data_size": 
63488 00:20:49.487 } 00:20:49.487 ] 00:20:49.487 }' 00:20:49.487 16:38:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:49.487 16:38:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.487 16:38:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:49.745 16:38:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.745 16:38:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:49.745 [2024-07-13 16:38:21.007064] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:50.004 [2024-07-13 16:38:21.342311] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:50.004 [2024-07-13 16:38:21.449054] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:50.004 [2024-07-13 16:38:21.452902] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.573 16:38:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:50.573 16:38:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.573 16:38:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:50.573 16:38:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:50.573 16:38:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:50.573 16:38:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:50.573 16:38:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.573 16:38:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.832 16:38:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:50.832 "name": "raid_bdev1", 00:20:50.832 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:50.832 "strip_size_kb": 0, 00:20:50.832 "state": "online", 00:20:50.832 "raid_level": "raid1", 00:20:50.832 "superblock": true, 00:20:50.832 "num_base_bdevs": 2, 00:20:50.832 "num_base_bdevs_discovered": 2, 00:20:50.832 "num_base_bdevs_operational": 2, 00:20:50.832 "base_bdevs_list": [ 00:20:50.832 { 00:20:50.832 "name": "spare", 00:20:50.832 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:50.832 "is_configured": true, 00:20:50.832 "data_offset": 2048, 00:20:50.832 "data_size": 63488 00:20:50.832 }, 00:20:50.832 { 00:20:50.832 "name": "BaseBdev2", 00:20:50.832 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:50.832 "is_configured": true, 00:20:50.832 "data_offset": 2048, 00:20:50.832 "data_size": 63488 00:20:50.832 } 00:20:50.832 ] 00:20:50.832 }' 00:20:50.832 16:38:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@660 -- # break 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.091 16:38:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.348 "name": "raid_bdev1", 00:20:51.348 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:51.348 "strip_size_kb": 0, 00:20:51.348 "state": "online", 00:20:51.348 "raid_level": "raid1", 00:20:51.348 "superblock": true, 00:20:51.348 "num_base_bdevs": 2, 00:20:51.348 "num_base_bdevs_discovered": 2, 00:20:51.348 "num_base_bdevs_operational": 2, 00:20:51.348 "base_bdevs_list": [ 00:20:51.348 { 00:20:51.348 "name": "spare", 00:20:51.348 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:51.348 "is_configured": true, 00:20:51.348 "data_offset": 2048, 00:20:51.348 "data_size": 63488 00:20:51.348 }, 00:20:51.348 { 00:20:51.348 "name": "BaseBdev2", 00:20:51.348 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:51.348 "is_configured": true, 00:20:51.348 "data_offset": 2048, 00:20:51.348 "data_size": 63488 00:20:51.348 } 00:20:51.348 ] 00:20:51.348 }' 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.348 16:38:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.606 16:38:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.606 "name": "raid_bdev1", 00:20:51.606 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:51.606 "strip_size_kb": 0, 00:20:51.606 "state": "online", 00:20:51.606 "raid_level": "raid1", 00:20:51.606 "superblock": true, 00:20:51.606 "num_base_bdevs": 2, 00:20:51.606 "num_base_bdevs_discovered": 2, 00:20:51.606 "num_base_bdevs_operational": 2, 00:20:51.606 "base_bdevs_list": [ 00:20:51.606 { 00:20:51.606 "name": "spare", 00:20:51.606 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:51.606 "is_configured": true, 00:20:51.606 "data_offset": 2048, 00:20:51.606 "data_size": 63488 00:20:51.606 }, 00:20:51.606 { 00:20:51.606 "name": "BaseBdev2", 00:20:51.606 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:51.606 "is_configured": true, 00:20:51.606 "data_offset": 2048, 00:20:51.606 "data_size": 63488 00:20:51.606 } 00:20:51.606 ] 00:20:51.606 }' 00:20:51.606 16:38:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.606 16:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:52.199 16:38:23 -- 
bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:52.463 [2024-07-13 16:38:23.875968] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:52.463 [2024-07-13 16:38:23.876031] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:52.721 00:20:52.721 Latency(us) 00:20:52.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.721 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:52.721 raid_bdev1 : 12.12 106.63 319.89 0.00 0.00 13093.57 372.54 114844.04 00:20:52.721 =================================================================================================================== 00:20:52.721 Total : 106.63 319.89 0.00 0.00 13093.57 372.54 114844.04 00:20:52.721 [2024-07-13 16:38:23.973863] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.721 [2024-07-13 16:38:23.973938] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.721 [2024-07-13 16:38:23.974066] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:52.721 [2024-07-13 16:38:23.974079] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:20:52.721 0 00:20:52.721 16:38:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.721 16:38:24 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:52.979 16:38:24 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:52.979 16:38:24 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:52.979 16:38:24 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:52.979 16:38:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:52.979 16:38:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:52.979 16:38:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:52.979 16:38:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:52.979 16:38:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:52.979 16:38:24 -- bdev/nbd_common.sh@12 -- # local i 00:20:52.979 16:38:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:52.979 16:38:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:52.979 16:38:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:53.237 /dev/nbd0 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:53.237 16:38:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:53.237 16:38:24 -- common/autotest_common.sh@857 -- # local i 00:20:53.237 16:38:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:53.237 16:38:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:53.237 16:38:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:53.237 16:38:24 -- common/autotest_common.sh@861 -- # break 00:20:53.237 16:38:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:53.237 16:38:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:53.237 16:38:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.237 1+0 records in 
00:20:53.237 1+0 records out 00:20:53.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459284 s, 8.9 MB/s 00:20:53.237 16:38:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.237 16:38:24 -- common/autotest_common.sh@874 -- # size=4096 00:20:53.237 16:38:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.237 16:38:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:53.237 16:38:24 -- common/autotest_common.sh@877 -- # return 0 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.237 16:38:24 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:53.237 16:38:24 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:53.237 16:38:24 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@12 -- # local i 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.237 16:38:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:53.495 /dev/nbd1 00:20:53.495 16:38:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:53.495 16:38:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:53.495 16:38:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:53.495 16:38:24 -- common/autotest_common.sh@857 -- # local i 00:20:53.495 16:38:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:53.496 16:38:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:53.496 16:38:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:53.496 16:38:24 -- common/autotest_common.sh@861 -- # break 00:20:53.496 16:38:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:53.496 16:38:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:53.496 16:38:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.496 1+0 records in 00:20:53.496 1+0 records out 00:20:53.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307943 s, 13.3 MB/s 00:20:53.496 16:38:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.496 16:38:24 -- common/autotest_common.sh@874 -- # size=4096 00:20:53.496 16:38:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.496 16:38:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:53.496 16:38:24 -- common/autotest_common.sh@877 -- # return 0 00:20:53.496 16:38:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:53.496 16:38:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:53.496 16:38:24 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:53.754 16:38:25 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:53.754 16:38:25 -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:20:53.754 16:38:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:53.754 16:38:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:53.754 16:38:25 -- bdev/nbd_common.sh@51 -- # local i 00:20:53.754 16:38:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.754 16:38:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@41 -- # break 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.012 16:38:25 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@51 -- # local i 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:54.012 16:38:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:54.271 16:38:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:54.271 16:38:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:54.271 16:38:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.271 16:38:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.271 16:38:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.271 16:38:25 -- bdev/nbd_common.sh@41 -- # break 00:20:54.271 16:38:25 -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.271 16:38:25 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:54.271 16:38:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:54.271 16:38:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:54.271 16:38:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:54.528 16:38:25 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:54.786 [2024-07-13 16:38:26.011523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:54.786 [2024-07-13 16:38:26.011681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.786 [2024-07-13 16:38:26.011725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:54.786 [2024-07-13 16:38:26.011761] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.786 [2024-07-13 16:38:26.014979] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.786 [2024-07-13 16:38:26.015096] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:54.786 [2024-07-13 16:38:26.015219] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:54.786 [2024-07-13 16:38:26.015292] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.786 BaseBdev1 00:20:54.786 16:38:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:54.786 16:38:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:54.786 16:38:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:55.044 16:38:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:55.044 [2024-07-13 16:38:26.451699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:55.044 [2024-07-13 16:38:26.451844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.044 [2024-07-13 16:38:26.451885] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:55.044 [2024-07-13 16:38:26.451916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.044 [2024-07-13 16:38:26.452467] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.044 [2024-07-13 16:38:26.452540] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:55.044 [2024-07-13 16:38:26.452647] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:55.044 [2024-07-13 16:38:26.452661] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:55.044 [2024-07-13 16:38:26.452669] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:55.044 [2024-07-13 16:38:26.452699] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state configuring 00:20:55.044 [2024-07-13 16:38:26.452766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.044 BaseBdev2 00:20:55.044 16:38:26 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:55.303 16:38:26 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:55.561 [2024-07-13 16:38:26.871834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:55.561 [2024-07-13 16:38:26.871953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.561 [2024-07-13 16:38:26.872015] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:55.561 [2024-07-13 16:38:26.872047] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.561 [2024-07-13 16:38:26.872614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.561 [2024-07-13 16:38:26.872673] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:55.561 [2024-07-13 16:38:26.872784] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:55.561 [2024-07-13 16:38:26.872835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:55.561 spare 00:20:55.562 
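
The "spare" device claimed above is not a bare malloc bdev: the test stacks a delay bdev and a passthru bdev on top of one so rebuild I/O is both slow enough to observe and reachable under a stable name. A minimal sketch of that stack, using the same rpc.py calls visible elsewhere in this trace; the $rpc and $sock shorthands are editorial, not part of the suite:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # backing store: 32 MiB malloc bdev with 512-byte blocks
    $rpc -s $sock bdev_malloc_create 32 512 -b spare_malloc
    # delay bdev: 100000 us write latency keeps the rebuild observable
    $rpc -s $sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    # passthru bdev: gives the raid module a stable "spare" name to claim
    $rpc -s $sock bdev_passthru_create -b spare_delay -p spare
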
16:38:26 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.562 16:38:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.562 [2024-07-13 16:38:26.972954] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:20:55.562 [2024-07-13 16:38:26.973000] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:55.562 [2024-07-13 16:38:26.973193] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:20:55.562 [2024-07-13 16:38:26.973690] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:20:55.562 [2024-07-13 16:38:26.973712] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:20:55.562 [2024-07-13 16:38:26.973855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.821 16:38:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.821 "name": "raid_bdev1", 00:20:55.821 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:55.821 "strip_size_kb": 0, 00:20:55.821 "state": "online", 00:20:55.821 "raid_level": "raid1", 00:20:55.821 "superblock": true, 00:20:55.821 "num_base_bdevs": 2, 00:20:55.821 "num_base_bdevs_discovered": 2, 00:20:55.821 "num_base_bdevs_operational": 2, 00:20:55.821 "base_bdevs_list": [ 00:20:55.821 { 00:20:55.821 "name": "spare", 00:20:55.821 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:55.821 "is_configured": true, 00:20:55.821 "data_offset": 2048, 00:20:55.821 "data_size": 63488 00:20:55.821 }, 00:20:55.821 { 00:20:55.821 "name": "BaseBdev2", 00:20:55.821 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:55.821 "is_configured": true, 00:20:55.821 "data_offset": 2048, 00:20:55.821 "data_size": 63488 00:20:55.821 } 00:20:55.821 ] 00:20:55.821 }' 00:20:55.821 16:38:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.821 16:38:27 -- common/autotest_common.sh@10 -- # set +x 00:20:56.388 16:38:27 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:56.388 16:38:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.388 16:38:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:56.388 16:38:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:56.388 16:38:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.388 16:38:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.388 16:38:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.647 16:38:27 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:56.647 "name": "raid_bdev1", 00:20:56.647 "uuid": "309d057a-c46c-4864-9e58-a1cd5fde9a2e", 00:20:56.647 "strip_size_kb": 0, 00:20:56.647 "state": "online", 00:20:56.647 "raid_level": "raid1", 00:20:56.647 "superblock": true, 00:20:56.647 "num_base_bdevs": 2, 00:20:56.647 "num_base_bdevs_discovered": 2, 00:20:56.647 "num_base_bdevs_operational": 2, 00:20:56.647 "base_bdevs_list": [ 00:20:56.647 { 00:20:56.647 "name": "spare", 00:20:56.647 "uuid": "af4aad93-3462-5a5e-9511-7f67d1163360", 00:20:56.647 "is_configured": true, 00:20:56.647 "data_offset": 2048, 00:20:56.647 "data_size": 63488 00:20:56.647 }, 00:20:56.647 { 00:20:56.647 "name": "BaseBdev2", 00:20:56.647 "uuid": "8ac4f4de-8a4b-5e73-bc0a-1a3362977e44", 00:20:56.647 "is_configured": true, 00:20:56.647 "data_offset": 2048, 00:20:56.647 "data_size": 63488 00:20:56.647 } 00:20:56.647 ] 00:20:56.647 }' 00:20:56.647 16:38:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:56.647 16:38:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:56.647 16:38:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:56.647 16:38:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:56.647 16:38:27 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.647 16:38:27 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:56.906 16:38:28 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.906 16:38:28 -- bdev/bdev_raid.sh@709 -- # killprocess 134610 00:20:56.906 16:38:28 -- common/autotest_common.sh@926 -- # '[' -z 134610 ']' 00:20:56.906 16:38:28 -- common/autotest_common.sh@930 -- # kill -0 134610 00:20:56.906 16:38:28 -- common/autotest_common.sh@931 -- # uname 00:20:56.906 16:38:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:56.906 16:38:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134610 00:20:56.906 killing process with pid 134610 00:20:56.906 Received shutdown signal, test time was about 16.403423 seconds 00:20:56.906 00:20:56.906 Latency(us) 00:20:56.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.906 =================================================================================================================== 00:20:56.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.906 16:38:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:56.906 16:38:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:56.906 16:38:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134610' 00:20:56.906 16:38:28 -- common/autotest_common.sh@945 -- # kill 134610 00:20:56.906 16:38:28 -- common/autotest_common.sh@950 -- # wait 134610 00:20:56.906 [2024-07-13 16:38:28.254794] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.906 [2024-07-13 16:38:28.254929] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.906 [2024-07-13 16:38:28.255010] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.906 [2024-07-13 16:38:28.255020] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:20:56.906 [2024-07-13 16:38:28.305321] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:57.472 ************************************ 00:20:57.472 END TEST 
raid_rebuild_test_sb_io 00:20:57.472 ************************************ 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:57.472 00:20:57.472 real 0m21.427s 00:20:57.472 user 0m33.574s 00:20:57.472 sys 0m3.433s 00:20:57.472 16:38:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.472 16:38:28 -- common/autotest_common.sh@10 -- # set +x 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:20:57.472 16:38:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:57.472 16:38:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:57.472 16:38:28 -- common/autotest_common.sh@10 -- # set +x 00:20:57.472 ************************************ 00:20:57.472 START TEST raid_rebuild_test 00:20:57.472 ************************************ 00:20:57.472 16:38:28 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:57.472 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@544 -- # raid_pid=135179 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135179 /var/tmp/spdk-raid.sock 00:20:57.473 16:38:28 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:57.473 16:38:28 -- common/autotest_common.sh@819 -- # '[' -z 135179 ']' 00:20:57.473 16:38:28 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:20:57.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:57.473 16:38:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:57.473 16:38:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:57.473 16:38:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:57.473 16:38:28 -- common/autotest_common.sh@10 -- # set +x 00:20:57.473 [2024-07-13 16:38:28.909576] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:57.473 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:57.473 Zero copy mechanism will not be used. 00:20:57.473 [2024-07-13 16:38:28.909896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135179 ] 00:20:57.731 [2024-07-13 16:38:29.067903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.731 [2024-07-13 16:38:29.151827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.990 [2024-07-13 16:38:29.234154] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.557 16:38:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:58.557 16:38:29 -- common/autotest_common.sh@852 -- # return 0 00:20:58.557 16:38:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:58.557 16:38:29 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:58.557 16:38:29 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:58.815 BaseBdev1 00:20:58.815 16:38:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:58.815 16:38:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:58.816 16:38:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:59.074 BaseBdev2 00:20:59.074 16:38:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:59.074 16:38:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:59.074 16:38:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:59.332 BaseBdev3 00:20:59.332 16:38:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:59.332 16:38:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:59.332 16:38:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:59.591 BaseBdev4 00:20:59.591 16:38:30 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:59.849 spare_malloc 00:20:59.849 16:38:31 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:00.107 spare_delay 00:21:00.107 16:38:31 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:00.107 [2024-07-13 16:38:31.552194] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:00.107 [2024-07-13 16:38:31.552378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.107 [2024-07-13 16:38:31.552430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:00.107 [2024-07-13 16:38:31.552496] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.107 [2024-07-13 16:38:31.555827] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.107 [2024-07-13 16:38:31.555932] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:00.107 spare 00:21:00.107 16:38:31 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:00.365 [2024-07-13 16:38:31.772385] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:00.365 [2024-07-13 16:38:31.775099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.365 [2024-07-13 16:38:31.775177] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:00.365 [2024-07-13 16:38:31.775210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:00.365 [2024-07-13 16:38:31.775305] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:21:00.365 [2024-07-13 16:38:31.775315] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:00.365 [2024-07-13 16:38:31.775520] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:21:00.365 [2024-07-13 16:38:31.775999] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:21:00.365 [2024-07-13 16:38:31.776011] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:21:00.365 [2024-07-13 16:38:31.776319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.365 16:38:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.623 16:38:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:00.623 "name": "raid_bdev1", 00:21:00.623 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:00.623 "strip_size_kb": 0, 00:21:00.623 "state": "online", 00:21:00.623 "raid_level": "raid1", 00:21:00.623 "superblock": false, 00:21:00.623 "num_base_bdevs": 4, 
00:21:00.623 "num_base_bdevs_discovered": 4, 00:21:00.623 "num_base_bdevs_operational": 4, 00:21:00.623 "base_bdevs_list": [ 00:21:00.623 { 00:21:00.623 "name": "BaseBdev1", 00:21:00.623 "uuid": "dfa21cc0-dea9-4929-81ec-b2d169b4fd9b", 00:21:00.623 "is_configured": true, 00:21:00.623 "data_offset": 0, 00:21:00.623 "data_size": 65536 00:21:00.623 }, 00:21:00.623 { 00:21:00.623 "name": "BaseBdev2", 00:21:00.623 "uuid": "2b36208f-9f52-4ae9-8f46-7c0cc8a74ff1", 00:21:00.623 "is_configured": true, 00:21:00.623 "data_offset": 0, 00:21:00.623 "data_size": 65536 00:21:00.623 }, 00:21:00.623 { 00:21:00.623 "name": "BaseBdev3", 00:21:00.623 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:00.623 "is_configured": true, 00:21:00.623 "data_offset": 0, 00:21:00.623 "data_size": 65536 00:21:00.623 }, 00:21:00.623 { 00:21:00.623 "name": "BaseBdev4", 00:21:00.623 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:00.623 "is_configured": true, 00:21:00.623 "data_offset": 0, 00:21:00.623 "data_size": 65536 00:21:00.623 } 00:21:00.623 ] 00:21:00.623 }' 00:21:00.623 16:38:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:00.623 16:38:32 -- common/autotest_common.sh@10 -- # set +x 00:21:01.187 16:38:32 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:01.187 16:38:32 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:01.445 [2024-07-13 16:38:32.832826] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:01.445 16:38:32 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:01.445 16:38:32 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.445 16:38:32 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:01.704 16:38:33 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:01.704 16:38:33 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:01.704 16:38:33 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:01.704 16:38:33 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:01.704 16:38:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:01.704 16:38:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:01.704 16:38:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:01.704 16:38:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:01.704 16:38:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:01.704 16:38:33 -- bdev/nbd_common.sh@12 -- # local i 00:21:01.704 16:38:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:01.704 16:38:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:01.704 16:38:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:01.965 [2024-07-13 16:38:33.392776] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:01.965 /dev/nbd0 00:21:02.224 16:38:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:02.224 16:38:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:02.224 16:38:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:02.224 16:38:33 -- common/autotest_common.sh@857 -- # local i 00:21:02.224 16:38:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:02.224 16:38:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:02.224 16:38:33 -- common/autotest_common.sh@860 -- # grep -q -w 
nbd0 /proc/partitions 00:21:02.224 16:38:33 -- common/autotest_common.sh@861 -- # break 00:21:02.224 16:38:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:02.224 16:38:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:02.224 16:38:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:02.224 1+0 records in 00:21:02.224 1+0 records out 00:21:02.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454996 s, 9.0 MB/s 00:21:02.224 16:38:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.224 16:38:33 -- common/autotest_common.sh@874 -- # size=4096 00:21:02.224 16:38:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.224 16:38:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:02.224 16:38:33 -- common/autotest_common.sh@877 -- # return 0 00:21:02.224 16:38:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:02.224 16:38:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.224 16:38:33 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:02.224 16:38:33 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:02.224 16:38:33 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:08.887 65536+0 records in 00:21:08.887 65536+0 records out 00:21:08.887 33554432 bytes (34 MB, 32 MiB) copied, 5.91275 s, 5.7 MB/s 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@51 -- # local i 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:08.887 [2024-07-13 16:38:39.624018] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@41 -- # break 00:21:08.887 16:38:39 -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:08.887 [2024-07-13 16:38:39.895667] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
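
The verify_raid_bdev_state call being set up here reduces to one RPC plus a handful of jq assertions against the JSON dumped throughout this log. A condensed sketch of the check, asserting the post-removal state (online, raid1, three of four members still discovered); $rpc and $sock are editorial shorthands, and the field names are taken from the raid_bdev_info dumps in this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == raid1 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") -eq 3 ]]
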
00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.887 16:38:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.887 16:38:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.887 "name": "raid_bdev1", 00:21:08.887 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:08.887 "strip_size_kb": 0, 00:21:08.887 "state": "online", 00:21:08.887 "raid_level": "raid1", 00:21:08.887 "superblock": false, 00:21:08.887 "num_base_bdevs": 4, 00:21:08.887 "num_base_bdevs_discovered": 3, 00:21:08.887 "num_base_bdevs_operational": 3, 00:21:08.887 "base_bdevs_list": [ 00:21:08.887 { 00:21:08.887 "name": null, 00:21:08.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.887 "is_configured": false, 00:21:08.887 "data_offset": 0, 00:21:08.887 "data_size": 65536 00:21:08.887 }, 00:21:08.887 { 00:21:08.887 "name": "BaseBdev2", 00:21:08.887 "uuid": "2b36208f-9f52-4ae9-8f46-7c0cc8a74ff1", 00:21:08.887 "is_configured": true, 00:21:08.887 "data_offset": 0, 00:21:08.887 "data_size": 65536 00:21:08.887 }, 00:21:08.887 { 00:21:08.887 "name": "BaseBdev3", 00:21:08.887 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:08.887 "is_configured": true, 00:21:08.887 "data_offset": 0, 00:21:08.887 "data_size": 65536 00:21:08.887 }, 00:21:08.887 { 00:21:08.887 "name": "BaseBdev4", 00:21:08.887 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:08.887 "is_configured": true, 00:21:08.887 "data_offset": 0, 00:21:08.887 "data_size": 65536 00:21:08.887 } 00:21:08.887 ] 00:21:08.887 }' 00:21:08.887 16:38:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.887 16:38:40 -- common/autotest_common.sh@10 -- # set +x 00:21:09.454 16:38:40 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:09.712 [2024-07-13 16:38:41.044010] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:09.712 [2024-07-13 16:38:41.044111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:09.712 [2024-07-13 16:38:41.050787] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:21:09.712 [2024-07-13 16:38:41.053487] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:09.712 16:38:41 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:10.647 16:38:42 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.647 16:38:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:10.647 16:38:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:10.647 16:38:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:10.647 16:38:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:10.647 16:38:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.647 16:38:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.905 16:38:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.905 "name": "raid_bdev1", 00:21:10.905 
"uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:10.905 "strip_size_kb": 0, 00:21:10.905 "state": "online", 00:21:10.905 "raid_level": "raid1", 00:21:10.905 "superblock": false, 00:21:10.905 "num_base_bdevs": 4, 00:21:10.905 "num_base_bdevs_discovered": 4, 00:21:10.905 "num_base_bdevs_operational": 4, 00:21:10.905 "process": { 00:21:10.905 "type": "rebuild", 00:21:10.905 "target": "spare", 00:21:10.905 "progress": { 00:21:10.905 "blocks": 24576, 00:21:10.905 "percent": 37 00:21:10.905 } 00:21:10.905 }, 00:21:10.905 "base_bdevs_list": [ 00:21:10.905 { 00:21:10.905 "name": "spare", 00:21:10.905 "uuid": "d6d13a42-f20f-5efd-8106-897e686afb34", 00:21:10.905 "is_configured": true, 00:21:10.905 "data_offset": 0, 00:21:10.905 "data_size": 65536 00:21:10.905 }, 00:21:10.905 { 00:21:10.905 "name": "BaseBdev2", 00:21:10.905 "uuid": "2b36208f-9f52-4ae9-8f46-7c0cc8a74ff1", 00:21:10.905 "is_configured": true, 00:21:10.905 "data_offset": 0, 00:21:10.905 "data_size": 65536 00:21:10.905 }, 00:21:10.905 { 00:21:10.905 "name": "BaseBdev3", 00:21:10.905 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:10.905 "is_configured": true, 00:21:10.905 "data_offset": 0, 00:21:10.905 "data_size": 65536 00:21:10.905 }, 00:21:10.905 { 00:21:10.905 "name": "BaseBdev4", 00:21:10.905 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:10.905 "is_configured": true, 00:21:10.905 "data_offset": 0, 00:21:10.905 "data_size": 65536 00:21:10.905 } 00:21:10.905 ] 00:21:10.905 }' 00:21:10.905 16:38:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.163 16:38:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.163 16:38:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.163 16:38:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.163 16:38:42 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:11.420 [2024-07-13 16:38:42.675474] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:11.421 [2024-07-13 16:38:42.768203] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:11.421 [2024-07-13 16:38:42.768388] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.421 16:38:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.679 16:38:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.679 "name": "raid_bdev1", 00:21:11.679 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:11.679 
"strip_size_kb": 0, 00:21:11.679 "state": "online", 00:21:11.679 "raid_level": "raid1", 00:21:11.679 "superblock": false, 00:21:11.679 "num_base_bdevs": 4, 00:21:11.679 "num_base_bdevs_discovered": 3, 00:21:11.679 "num_base_bdevs_operational": 3, 00:21:11.679 "base_bdevs_list": [ 00:21:11.679 { 00:21:11.679 "name": null, 00:21:11.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.679 "is_configured": false, 00:21:11.679 "data_offset": 0, 00:21:11.679 "data_size": 65536 00:21:11.679 }, 00:21:11.679 { 00:21:11.679 "name": "BaseBdev2", 00:21:11.679 "uuid": "2b36208f-9f52-4ae9-8f46-7c0cc8a74ff1", 00:21:11.679 "is_configured": true, 00:21:11.679 "data_offset": 0, 00:21:11.679 "data_size": 65536 00:21:11.679 }, 00:21:11.679 { 00:21:11.679 "name": "BaseBdev3", 00:21:11.679 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:11.679 "is_configured": true, 00:21:11.679 "data_offset": 0, 00:21:11.679 "data_size": 65536 00:21:11.679 }, 00:21:11.679 { 00:21:11.679 "name": "BaseBdev4", 00:21:11.679 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:11.679 "is_configured": true, 00:21:11.679 "data_offset": 0, 00:21:11.679 "data_size": 65536 00:21:11.679 } 00:21:11.679 ] 00:21:11.679 }' 00:21:11.679 16:38:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.679 16:38:43 -- common/autotest_common.sh@10 -- # set +x 00:21:12.246 16:38:43 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:12.246 16:38:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:12.246 16:38:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:12.246 16:38:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:12.246 16:38:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:12.246 16:38:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.246 16:38:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.505 16:38:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:12.505 "name": "raid_bdev1", 00:21:12.505 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:12.505 "strip_size_kb": 0, 00:21:12.505 "state": "online", 00:21:12.505 "raid_level": "raid1", 00:21:12.505 "superblock": false, 00:21:12.505 "num_base_bdevs": 4, 00:21:12.505 "num_base_bdevs_discovered": 3, 00:21:12.505 "num_base_bdevs_operational": 3, 00:21:12.505 "base_bdevs_list": [ 00:21:12.505 { 00:21:12.505 "name": null, 00:21:12.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.505 "is_configured": false, 00:21:12.505 "data_offset": 0, 00:21:12.505 "data_size": 65536 00:21:12.505 }, 00:21:12.505 { 00:21:12.505 "name": "BaseBdev2", 00:21:12.505 "uuid": "2b36208f-9f52-4ae9-8f46-7c0cc8a74ff1", 00:21:12.505 "is_configured": true, 00:21:12.505 "data_offset": 0, 00:21:12.505 "data_size": 65536 00:21:12.505 }, 00:21:12.505 { 00:21:12.505 "name": "BaseBdev3", 00:21:12.505 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:12.505 "is_configured": true, 00:21:12.505 "data_offset": 0, 00:21:12.505 "data_size": 65536 00:21:12.505 }, 00:21:12.505 { 00:21:12.505 "name": "BaseBdev4", 00:21:12.505 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:12.505 "is_configured": true, 00:21:12.505 "data_offset": 0, 00:21:12.505 "data_size": 65536 00:21:12.505 } 00:21:12.505 ] 00:21:12.505 }' 00:21:12.505 16:38:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:12.505 16:38:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:12.505 16:38:43 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:12.505 16:38:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:12.505 16:38:43 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:12.764 [2024-07-13 16:38:44.100390] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:12.764 [2024-07-13 16:38:44.100462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:12.764 [2024-07-13 16:38:44.106864] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:21:12.764 [2024-07-13 16:38:44.109431] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:12.764 16:38:44 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:13.700 16:38:45 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.700 16:38:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.700 16:38:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:13.700 16:38:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:13.700 16:38:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.701 16:38:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.701 16:38:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.959 16:38:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.959 "name": "raid_bdev1", 00:21:13.959 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:13.959 "strip_size_kb": 0, 00:21:13.960 "state": "online", 00:21:13.960 "raid_level": "raid1", 00:21:13.960 "superblock": false, 00:21:13.960 "num_base_bdevs": 4, 00:21:13.960 "num_base_bdevs_discovered": 4, 00:21:13.960 "num_base_bdevs_operational": 4, 00:21:13.960 "process": { 00:21:13.960 "type": "rebuild", 00:21:13.960 "target": "spare", 00:21:13.960 "progress": { 00:21:13.960 "blocks": 24576, 00:21:13.960 "percent": 37 00:21:13.960 } 00:21:13.960 }, 00:21:13.960 "base_bdevs_list": [ 00:21:13.960 { 00:21:13.960 "name": "spare", 00:21:13.960 "uuid": "d6d13a42-f20f-5efd-8106-897e686afb34", 00:21:13.960 "is_configured": true, 00:21:13.960 "data_offset": 0, 00:21:13.960 "data_size": 65536 00:21:13.960 }, 00:21:13.960 { 00:21:13.960 "name": "BaseBdev2", 00:21:13.960 "uuid": "2b36208f-9f52-4ae9-8f46-7c0cc8a74ff1", 00:21:13.960 "is_configured": true, 00:21:13.960 "data_offset": 0, 00:21:13.960 "data_size": 65536 00:21:13.960 }, 00:21:13.960 { 00:21:13.960 "name": "BaseBdev3", 00:21:13.960 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:13.960 "is_configured": true, 00:21:13.960 "data_offset": 0, 00:21:13.960 "data_size": 65536 00:21:13.960 }, 00:21:13.960 { 00:21:13.960 "name": "BaseBdev4", 00:21:13.960 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:13.960 "is_configured": true, 00:21:13.960 "data_offset": 0, 00:21:13.960 "data_size": 65536 00:21:13.960 } 00:21:13.960 ] 00:21:13.960 }' 00:21:13.960 16:38:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:14.219 16:38:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.219 16:38:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:14.219 16:38:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.219 16:38:45 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:14.219 16:38:45 -- 
bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:14.219 16:38:45 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:14.219 16:38:45 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:14.219 16:38:45 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:14.478 [2024-07-13 16:38:45.755939] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:14.478 [2024-07-13 16:38:45.822649] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06220 00:21:14.478 16:38:45 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:14.478 16:38:45 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:14.478 16:38:45 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.478 16:38:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:14.478 16:38:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:14.478 16:38:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:14.478 16:38:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:14.478 16:38:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.478 16:38:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:14.737 "name": "raid_bdev1", 00:21:14.737 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:14.737 "strip_size_kb": 0, 00:21:14.737 "state": "online", 00:21:14.737 "raid_level": "raid1", 00:21:14.737 "superblock": false, 00:21:14.737 "num_base_bdevs": 4, 00:21:14.737 "num_base_bdevs_discovered": 3, 00:21:14.737 "num_base_bdevs_operational": 3, 00:21:14.737 "process": { 00:21:14.737 "type": "rebuild", 00:21:14.737 "target": "spare", 00:21:14.737 "progress": { 00:21:14.737 "blocks": 36864, 00:21:14.737 "percent": 56 00:21:14.737 } 00:21:14.737 }, 00:21:14.737 "base_bdevs_list": [ 00:21:14.737 { 00:21:14.737 "name": "spare", 00:21:14.737 "uuid": "d6d13a42-f20f-5efd-8106-897e686afb34", 00:21:14.737 "is_configured": true, 00:21:14.737 "data_offset": 0, 00:21:14.737 "data_size": 65536 00:21:14.737 }, 00:21:14.737 { 00:21:14.737 "name": null, 00:21:14.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.737 "is_configured": false, 00:21:14.737 "data_offset": 0, 00:21:14.737 "data_size": 65536 00:21:14.737 }, 00:21:14.737 { 00:21:14.737 "name": "BaseBdev3", 00:21:14.737 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:14.737 "is_configured": true, 00:21:14.737 "data_offset": 0, 00:21:14.737 "data_size": 65536 00:21:14.737 }, 00:21:14.737 { 00:21:14.737 "name": "BaseBdev4", 00:21:14.737 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:14.737 "is_configured": true, 00:21:14.737 "data_offset": 0, 00:21:14.737 "data_size": 65536 00:21:14.737 } 00:21:14.737 ] 00:21:14.737 }' 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@657 -- # local timeout=462 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.737 16:38:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.995 16:38:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:14.995 "name": "raid_bdev1", 00:21:14.995 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:14.995 "strip_size_kb": 0, 00:21:14.995 "state": "online", 00:21:14.995 "raid_level": "raid1", 00:21:14.995 "superblock": false, 00:21:14.995 "num_base_bdevs": 4, 00:21:14.995 "num_base_bdevs_discovered": 3, 00:21:14.995 "num_base_bdevs_operational": 3, 00:21:14.995 "process": { 00:21:14.995 "type": "rebuild", 00:21:14.995 "target": "spare", 00:21:14.995 "progress": { 00:21:14.995 "blocks": 45056, 00:21:14.995 "percent": 68 00:21:14.995 } 00:21:14.995 }, 00:21:14.995 "base_bdevs_list": [ 00:21:14.995 { 00:21:14.995 "name": "spare", 00:21:14.995 "uuid": "d6d13a42-f20f-5efd-8106-897e686afb34", 00:21:14.995 "is_configured": true, 00:21:14.995 "data_offset": 0, 00:21:14.995 "data_size": 65536 00:21:14.995 }, 00:21:14.995 { 00:21:14.995 "name": null, 00:21:14.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.995 "is_configured": false, 00:21:14.995 "data_offset": 0, 00:21:14.995 "data_size": 65536 00:21:14.996 }, 00:21:14.996 { 00:21:14.996 "name": "BaseBdev3", 00:21:14.996 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:14.996 "is_configured": true, 00:21:14.996 "data_offset": 0, 00:21:14.996 "data_size": 65536 00:21:14.996 }, 00:21:14.996 { 00:21:14.996 "name": "BaseBdev4", 00:21:14.996 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:14.996 "is_configured": true, 00:21:14.996 "data_offset": 0, 00:21:14.996 "data_size": 65536 00:21:14.996 } 00:21:14.996 ] 00:21:14.996 }' 00:21:14.996 16:38:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:14.996 16:38:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.996 16:38:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:14.996 16:38:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.996 16:38:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:15.933 [2024-07-13 16:38:47.334653] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:15.933 [2024-07-13 16:38:47.334771] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:15.933 [2024-07-13 16:38:47.334896] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.192 16:38:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:16.192 16:38:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.192 16:38:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:16.192 16:38:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:16.192 16:38:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:16.192 16:38:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:16.192 16:38:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:16.192 16:38:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:16.451 "name": "raid_bdev1", 00:21:16.451 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:16.451 "strip_size_kb": 0, 00:21:16.451 "state": "online", 00:21:16.451 "raid_level": "raid1", 00:21:16.451 "superblock": false, 00:21:16.451 "num_base_bdevs": 4, 00:21:16.451 "num_base_bdevs_discovered": 3, 00:21:16.451 "num_base_bdevs_operational": 3, 00:21:16.451 "base_bdevs_list": [ 00:21:16.451 { 00:21:16.451 "name": "spare", 00:21:16.451 "uuid": "d6d13a42-f20f-5efd-8106-897e686afb34", 00:21:16.451 "is_configured": true, 00:21:16.451 "data_offset": 0, 00:21:16.451 "data_size": 65536 00:21:16.451 }, 00:21:16.451 { 00:21:16.451 "name": null, 00:21:16.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.451 "is_configured": false, 00:21:16.451 "data_offset": 0, 00:21:16.451 "data_size": 65536 00:21:16.451 }, 00:21:16.451 { 00:21:16.451 "name": "BaseBdev3", 00:21:16.451 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:16.451 "is_configured": true, 00:21:16.451 "data_offset": 0, 00:21:16.451 "data_size": 65536 00:21:16.451 }, 00:21:16.451 { 00:21:16.451 "name": "BaseBdev4", 00:21:16.451 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:16.451 "is_configured": true, 00:21:16.451 "data_offset": 0, 00:21:16.451 "data_size": 65536 00:21:16.451 } 00:21:16.451 ] 00:21:16.451 }' 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@660 -- # break 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.451 16:38:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.712 16:38:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:16.712 "name": "raid_bdev1", 00:21:16.712 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:16.712 "strip_size_kb": 0, 00:21:16.712 "state": "online", 00:21:16.712 "raid_level": "raid1", 00:21:16.712 "superblock": false, 00:21:16.712 "num_base_bdevs": 4, 00:21:16.712 "num_base_bdevs_discovered": 3, 00:21:16.712 "num_base_bdevs_operational": 3, 00:21:16.712 "base_bdevs_list": [ 00:21:16.712 { 00:21:16.712 "name": "spare", 00:21:16.712 "uuid": "d6d13a42-f20f-5efd-8106-897e686afb34", 00:21:16.712 "is_configured": true, 00:21:16.712 "data_offset": 0, 00:21:16.712 "data_size": 65536 00:21:16.712 }, 00:21:16.712 { 00:21:16.712 "name": null, 00:21:16.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.712 "is_configured": false, 00:21:16.712 "data_offset": 0, 00:21:16.712 "data_size": 65536 00:21:16.712 }, 00:21:16.712 { 00:21:16.712 "name": "BaseBdev3", 00:21:16.712 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:16.712 
"is_configured": true, 00:21:16.712 "data_offset": 0, 00:21:16.712 "data_size": 65536 00:21:16.712 }, 00:21:16.712 { 00:21:16.712 "name": "BaseBdev4", 00:21:16.712 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:16.712 "is_configured": true, 00:21:16.712 "data_offset": 0, 00:21:16.712 "data_size": 65536 00:21:16.712 } 00:21:16.712 ] 00:21:16.712 }' 00:21:16.712 16:38:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:16.712 16:38:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:16.712 16:38:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.974 16:38:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.236 16:38:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:17.236 "name": "raid_bdev1", 00:21:17.236 "uuid": "013e22a5-b5bd-4150-af2e-415de188a9f7", 00:21:17.236 "strip_size_kb": 0, 00:21:17.236 "state": "online", 00:21:17.236 "raid_level": "raid1", 00:21:17.236 "superblock": false, 00:21:17.236 "num_base_bdevs": 4, 00:21:17.236 "num_base_bdevs_discovered": 3, 00:21:17.236 "num_base_bdevs_operational": 3, 00:21:17.236 "base_bdevs_list": [ 00:21:17.236 { 00:21:17.236 "name": "spare", 00:21:17.236 "uuid": "d6d13a42-f20f-5efd-8106-897e686afb34", 00:21:17.236 "is_configured": true, 00:21:17.236 "data_offset": 0, 00:21:17.236 "data_size": 65536 00:21:17.236 }, 00:21:17.236 { 00:21:17.236 "name": null, 00:21:17.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.236 "is_configured": false, 00:21:17.236 "data_offset": 0, 00:21:17.236 "data_size": 65536 00:21:17.236 }, 00:21:17.236 { 00:21:17.236 "name": "BaseBdev3", 00:21:17.236 "uuid": "3d28f41e-7ad0-4ffc-b3a7-b40c9efe0437", 00:21:17.236 "is_configured": true, 00:21:17.236 "data_offset": 0, 00:21:17.236 "data_size": 65536 00:21:17.236 }, 00:21:17.236 { 00:21:17.236 "name": "BaseBdev4", 00:21:17.236 "uuid": "0ef9c738-41a4-4fb4-b703-37daf4c8c422", 00:21:17.236 "is_configured": true, 00:21:17.236 "data_offset": 0, 00:21:17.236 "data_size": 65536 00:21:17.236 } 00:21:17.236 ] 00:21:17.236 }' 00:21:17.236 16:38:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:17.236 16:38:48 -- common/autotest_common.sh@10 -- # set +x 00:21:17.805 16:38:49 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:18.064 [2024-07-13 16:38:49.282731] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:18.064 [2024-07-13 16:38:49.282791] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:18.064 [2024-07-13 16:38:49.282930] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.064 [2024-07-13 16:38:49.283033] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.064 [2024-07-13 16:38:49.283062] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:21:18.064 16:38:49 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:18.064 16:38:49 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.324 16:38:49 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:18.324 16:38:49 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:18.324 16:38:49 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:18.324 16:38:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:18.324 16:38:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:18.324 16:38:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:18.324 16:38:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:18.324 16:38:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:18.324 16:38:49 -- bdev/nbd_common.sh@12 -- # local i 00:21:18.324 16:38:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:18.324 16:38:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:18.324 16:38:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:18.583 /dev/nbd0 00:21:18.583 16:38:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:18.583 16:38:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:18.583 16:38:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:18.583 16:38:49 -- common/autotest_common.sh@857 -- # local i 00:21:18.583 16:38:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:18.583 16:38:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:18.583 16:38:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:18.583 16:38:49 -- common/autotest_common.sh@861 -- # break 00:21:18.583 16:38:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:18.583 16:38:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:18.583 16:38:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.583 1+0 records in 00:21:18.583 1+0 records out 00:21:18.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772398 s, 5.3 MB/s 00:21:18.583 16:38:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.583 16:38:49 -- common/autotest_common.sh@874 -- # size=4096 00:21:18.583 16:38:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.583 16:38:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:18.583 16:38:49 -- common/autotest_common.sh@877 -- # return 0 00:21:18.583 16:38:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:18.583 16:38:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:18.583 16:38:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:18.842 /dev/nbd1 00:21:18.842 
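
Both nbd exports are now coming up, and the waitfornbd helper traced on either side of this point gates the comparison that follows. Its core logic, reconstructed from the autotest_common.sh lines shown in this trace (the per-iteration sleep is an assumption: the run never needed a retry, so no delay is visible in the log):

    waitfornbd() {
        local nbd_name=$1 i size
        local testfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest  # path as used in this run
        # wait for the kernel to publish the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed back-off between probes
        done
        # a single 4 KiB O_DIRECT read proves the export actually serves I/O
        dd if=/dev/$nbd_name of=$testfile bs=4096 count=1 iflag=direct
        size=$(stat -c %s $testfile)
        rm -f $testfile
        [[ $size != 0 ]]
    }

Once both devices pass this probe, the cmp -i 0 /dev/nbd0 /dev/nbd1 just below byte-compares the exported BaseBdev1 against the rebuilt spare.
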
16:38:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:18.842 16:38:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:18.842 16:38:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:18.842 16:38:50 -- common/autotest_common.sh@857 -- # local i 00:21:18.842 16:38:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:18.842 16:38:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:18.842 16:38:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:18.842 16:38:50 -- common/autotest_common.sh@861 -- # break 00:21:18.842 16:38:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:18.842 16:38:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:18.842 16:38:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.842 1+0 records in 00:21:18.842 1+0 records out 00:21:18.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847614 s, 4.8 MB/s 00:21:18.842 16:38:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.842 16:38:50 -- common/autotest_common.sh@874 -- # size=4096 00:21:18.842 16:38:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.842 16:38:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:18.842 16:38:50 -- common/autotest_common.sh@877 -- # return 0 00:21:18.842 16:38:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:18.842 16:38:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:18.842 16:38:50 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:18.842 16:38:50 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:18.842 16:38:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:18.842 16:38:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:18.842 16:38:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:18.842 16:38:50 -- bdev/nbd_common.sh@51 -- # local i 00:21:18.842 16:38:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:18.842 16:38:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:19.101 16:38:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@41 -- # break 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@45 -- # return 0 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:19.360 16:38:50 -- 
bdev/nbd_common.sh@41 -- # break 00:21:19.360 16:38:50 -- bdev/nbd_common.sh@45 -- # return 0 00:21:19.360 16:38:50 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:19.360 16:38:50 -- bdev/bdev_raid.sh@709 -- # killprocess 135179 00:21:19.360 16:38:50 -- common/autotest_common.sh@926 -- # '[' -z 135179 ']' 00:21:19.360 16:38:50 -- common/autotest_common.sh@930 -- # kill -0 135179 00:21:19.360 16:38:50 -- common/autotest_common.sh@931 -- # uname 00:21:19.360 16:38:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:19.360 16:38:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135179 00:21:19.619 16:38:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:19.619 16:38:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:19.619 16:38:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135179' 00:21:19.619 killing process with pid 135179 00:21:19.619 16:38:50 -- common/autotest_common.sh@945 -- # kill 135179 00:21:19.619 Received shutdown signal, test time was about 60.000000 seconds 00:21:19.619 00:21:19.619 Latency(us) 00:21:19.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.619 =================================================================================================================== 00:21:19.619 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:19.619 [2024-07-13 16:38:50.836177] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:19.619 16:38:50 -- common/autotest_common.sh@950 -- # wait 135179 00:21:19.619 [2024-07-13 16:38:50.935323] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:20.187 00:21:20.187 real 0m22.541s 00:21:20.187 user 0m30.415s 00:21:20.187 sys 0m5.442s 00:21:20.187 16:38:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.187 16:38:51 -- common/autotest_common.sh@10 -- # set +x 00:21:20.187 ************************************ 00:21:20.187 END TEST raid_rebuild_test 00:21:20.187 ************************************ 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:21:20.187 16:38:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:20.187 16:38:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:20.187 16:38:51 -- common/autotest_common.sh@10 -- # set +x 00:21:20.187 ************************************ 00:21:20.187 START TEST raid_rebuild_test_sb 00:21:20.187 ************************************ 00:21:20.187 16:38:51 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( 
i <= num_base_bdevs )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@544 -- # raid_pid=135724 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135724 /var/tmp/spdk-raid.sock 00:21:20.187 16:38:51 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:20.187 16:38:51 -- common/autotest_common.sh@819 -- # '[' -z 135724 ']' 00:21:20.187 16:38:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:20.187 16:38:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:20.187 16:38:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:20.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:20.187 16:38:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:20.187 16:38:51 -- common/autotest_common.sh@10 -- # set +x 00:21:20.187 [2024-07-13 16:38:51.526497] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:20.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:20.187 Zero copy mechanism will not be used. 
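Process 135724 here is bdevperf acting as the RPC target for the superblock test; in essence the harness backgrounds it and polls the UNIX socket until RPC answers. A sketch assuming a plain rpc_get_methods ping suffices (the real waitforlisten helper is more thorough):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # retry until the server behind the socket accepts RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods &> /dev/null; do
      sleep 0.1
  done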
00:21:20.187 [2024-07-13 16:38:51.526711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135724 ] 00:21:20.446 [2024-07-13 16:38:51.670995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.446 [2024-07-13 16:38:51.755789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.446 [2024-07-13 16:38:51.837357] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.014 16:38:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:21.014 16:38:52 -- common/autotest_common.sh@852 -- # return 0 00:21:21.014 16:38:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:21.014 16:38:52 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:21.014 16:38:52 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:21.273 BaseBdev1_malloc 00:21:21.273 16:38:52 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:21.836 [2024-07-13 16:38:53.020736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:21.836 [2024-07-13 16:38:53.020903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.836 [2024-07-13 16:38:53.020950] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:21:21.836 [2024-07-13 16:38:53.021008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.836 [2024-07-13 16:38:53.024238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.836 [2024-07-13 16:38:53.024349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:21.836 BaseBdev1 00:21:21.836 16:38:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:21.836 16:38:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:21.836 16:38:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:22.093 BaseBdev2_malloc 00:21:22.093 16:38:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:22.093 [2024-07-13 16:38:53.509200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:22.093 [2024-07-13 16:38:53.509342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.093 [2024-07-13 16:38:53.509389] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:22.093 [2024-07-13 16:38:53.509442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.093 [2024-07-13 16:38:53.512303] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.093 [2024-07-13 16:38:53.512356] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:22.093 BaseBdev2 00:21:22.093 16:38:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:22.093 16:38:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:22.093 16:38:53 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:22.350 BaseBdev3_malloc 00:21:22.350 16:38:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:22.609 [2024-07-13 16:38:53.948719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:22.609 [2024-07-13 16:38:53.948849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.609 [2024-07-13 16:38:53.948901] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:22.609 [2024-07-13 16:38:53.948964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.609 [2024-07-13 16:38:53.951941] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.609 [2024-07-13 16:38:53.952017] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:22.609 BaseBdev3 00:21:22.609 16:38:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:22.609 16:38:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:22.609 16:38:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:22.867 BaseBdev4_malloc 00:21:22.867 16:38:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:23.125 [2024-07-13 16:38:54.445519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:23.125 [2024-07-13 16:38:54.445974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.125 [2024-07-13 16:38:54.446061] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:23.125 [2024-07-13 16:38:54.446294] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.125 [2024-07-13 16:38:54.449276] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.125 [2024-07-13 16:38:54.449536] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:23.125 BaseBdev4 00:21:23.125 16:38:54 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:23.383 spare_malloc 00:21:23.383 16:38:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:23.641 spare_delay 00:21:23.641 16:38:54 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:23.900 [2024-07-13 16:38:55.154360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:23.900 [2024-07-13 16:38:55.154751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.900 [2024-07-13 16:38:55.154896] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:23.900 [2024-07-13 16:38:55.155053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.900 [2024-07-13 16:38:55.158251] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:21:23.900 [2024-07-13 16:38:55.158454] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:23.900 spare 00:21:23.900 16:38:55 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:23.900 [2024-07-13 16:38:55.367012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:23.900 [2024-07-13 16:38:55.370031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.900 [2024-07-13 16:38:55.370315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:23.900 [2024-07-13 16:38:55.370404] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:23.900 [2024-07-13 16:38:55.370763] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:23.900 [2024-07-13 16:38:55.370882] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:24.158 [2024-07-13 16:38:55.371129] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:21:24.158 [2024-07-13 16:38:55.371688] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:24.158 [2024-07-13 16:38:55.371798] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:24.158 [2024-07-13 16:38:55.372135] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.158 16:38:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.416 16:38:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:24.416 "name": "raid_bdev1", 00:21:24.416 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:24.416 "strip_size_kb": 0, 00:21:24.416 "state": "online", 00:21:24.416 "raid_level": "raid1", 00:21:24.416 "superblock": true, 00:21:24.416 "num_base_bdevs": 4, 00:21:24.416 "num_base_bdevs_discovered": 4, 00:21:24.416 "num_base_bdevs_operational": 4, 00:21:24.416 "base_bdevs_list": [ 00:21:24.416 { 00:21:24.416 "name": "BaseBdev1", 00:21:24.416 "uuid": "695b49cf-ce68-554a-ad7b-4ca27cd5b7fe", 00:21:24.416 "is_configured": true, 00:21:24.416 "data_offset": 2048, 00:21:24.416 "data_size": 63488 00:21:24.416 }, 00:21:24.416 { 00:21:24.416 "name": "BaseBdev2", 00:21:24.416 "uuid": "e17366e4-d413-58b7-add6-03905225af8e", 00:21:24.416 "is_configured": true, 00:21:24.416 "data_offset": 2048, 
00:21:24.416 "data_size": 63488 00:21:24.416 }, 00:21:24.416 { 00:21:24.416 "name": "BaseBdev3", 00:21:24.416 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:24.416 "is_configured": true, 00:21:24.416 "data_offset": 2048, 00:21:24.416 "data_size": 63488 00:21:24.416 }, 00:21:24.416 { 00:21:24.416 "name": "BaseBdev4", 00:21:24.416 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:24.416 "is_configured": true, 00:21:24.416 "data_offset": 2048, 00:21:24.416 "data_size": 63488 00:21:24.416 } 00:21:24.416 ] 00:21:24.416 }' 00:21:24.416 16:38:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:24.416 16:38:55 -- common/autotest_common.sh@10 -- # set +x 00:21:24.981 16:38:56 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:24.981 16:38:56 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:25.239 [2024-07-13 16:38:56.491416] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:25.239 16:38:56 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:25.239 16:38:56 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.239 16:38:56 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:25.497 16:38:56 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:25.497 16:38:56 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:25.497 16:38:56 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:25.497 16:38:56 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:25.497 16:38:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:25.497 16:38:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:25.497 16:38:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:25.497 16:38:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:25.497 16:38:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:25.497 16:38:56 -- bdev/nbd_common.sh@12 -- # local i 00:21:25.497 16:38:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:25.497 16:38:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:25.497 16:38:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:25.756 [2024-07-13 16:38:56.991311] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:21:25.756 /dev/nbd0 00:21:25.756 16:38:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:25.756 16:38:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:25.756 16:38:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:25.756 16:38:57 -- common/autotest_common.sh@857 -- # local i 00:21:25.756 16:38:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:25.756 16:38:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:25.756 16:38:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:25.756 16:38:57 -- common/autotest_common.sh@861 -- # break 00:21:25.756 16:38:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:25.756 16:38:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:25.756 16:38:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:25.756 1+0 records in 00:21:25.756 1+0 records out 00:21:25.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610018 s, 6.7 
MB/s 00:21:25.756 16:38:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.756 16:38:57 -- common/autotest_common.sh@874 -- # size=4096 00:21:25.756 16:38:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.756 16:38:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:25.756 16:38:57 -- common/autotest_common.sh@877 -- # return 0 00:21:25.756 16:38:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.756 16:38:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:25.756 16:38:57 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:25.756 16:38:57 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:25.756 16:38:57 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:32.343 63488+0 records in 00:21:32.343 63488+0 records out 00:21:32.343 32505856 bytes (33 MB, 31 MiB) copied, 5.5502 s, 5.9 MB/s 00:21:32.343 16:39:02 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@51 -- # local i 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:32.343 [2024-07-13 16:39:02.843625] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@41 -- # break 00:21:32.343 16:39:02 -- bdev/nbd_common.sh@45 -- # return 0 00:21:32.343 16:39:02 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:32.343 [2024-07-13 16:39:03.039318] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:32.343 16:39:03 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:32.343 16:39:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:32.343 16:39:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:32.343 16:39:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:32.343 16:39:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:32.343 16:39:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:32.344 16:39:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:32.344 16:39:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:32.344 16:39:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:32.344 16:39:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:32.344 16:39:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.344 16:39:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:32.344 16:39:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:32.344 "name": "raid_bdev1", 00:21:32.344 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:32.344 "strip_size_kb": 0, 00:21:32.344 "state": "online", 00:21:32.344 "raid_level": "raid1", 00:21:32.344 "superblock": true, 00:21:32.344 "num_base_bdevs": 4, 00:21:32.344 "num_base_bdevs_discovered": 3, 00:21:32.344 "num_base_bdevs_operational": 3, 00:21:32.344 "base_bdevs_list": [ 00:21:32.344 { 00:21:32.344 "name": null, 00:21:32.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.344 "is_configured": false, 00:21:32.344 "data_offset": 2048, 00:21:32.344 "data_size": 63488 00:21:32.344 }, 00:21:32.344 { 00:21:32.344 "name": "BaseBdev2", 00:21:32.344 "uuid": "e17366e4-d413-58b7-add6-03905225af8e", 00:21:32.344 "is_configured": true, 00:21:32.344 "data_offset": 2048, 00:21:32.344 "data_size": 63488 00:21:32.344 }, 00:21:32.344 { 00:21:32.344 "name": "BaseBdev3", 00:21:32.344 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:32.344 "is_configured": true, 00:21:32.344 "data_offset": 2048, 00:21:32.344 "data_size": 63488 00:21:32.344 }, 00:21:32.344 { 00:21:32.344 "name": "BaseBdev4", 00:21:32.344 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:32.344 "is_configured": true, 00:21:32.344 "data_offset": 2048, 00:21:32.344 "data_size": 63488 00:21:32.344 } 00:21:32.344 ] 00:21:32.344 }' 00:21:32.344 16:39:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:32.344 16:39:03 -- common/autotest_common.sh@10 -- # set +x 00:21:32.602 16:39:03 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:32.860 [2024-07-13 16:39:04.175533] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:32.860 [2024-07-13 16:39:04.175905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:32.860 [2024-07-13 16:39:04.182342] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:21:32.860 [2024-07-13 16:39:04.185166] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:32.860 16:39:04 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:33.791 16:39:05 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.791 16:39:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.791 16:39:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:33.791 16:39:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:33.791 16:39:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.791 16:39:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.791 16:39:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.048 16:39:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:34.048 "name": "raid_bdev1", 00:21:34.048 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:34.048 "strip_size_kb": 0, 00:21:34.048 "state": "online", 00:21:34.048 "raid_level": "raid1", 00:21:34.048 "superblock": true, 00:21:34.048 "num_base_bdevs": 4, 00:21:34.048 "num_base_bdevs_discovered": 4, 00:21:34.048 "num_base_bdevs_operational": 4, 00:21:34.048 "process": { 00:21:34.048 "type": "rebuild", 00:21:34.048 "target": "spare", 00:21:34.048 "progress": { 00:21:34.048 "blocks": 24576, 00:21:34.048 "percent": 38 00:21:34.048 } 
00:21:34.048 }, 00:21:34.048 "base_bdevs_list": [ 00:21:34.048 { 00:21:34.048 "name": "spare", 00:21:34.048 "uuid": "39253975-c9dd-5120-b69e-a97ccd74d61b", 00:21:34.048 "is_configured": true, 00:21:34.048 "data_offset": 2048, 00:21:34.048 "data_size": 63488 00:21:34.048 }, 00:21:34.048 { 00:21:34.048 "name": "BaseBdev2", 00:21:34.048 "uuid": "e17366e4-d413-58b7-add6-03905225af8e", 00:21:34.048 "is_configured": true, 00:21:34.048 "data_offset": 2048, 00:21:34.048 "data_size": 63488 00:21:34.048 }, 00:21:34.048 { 00:21:34.048 "name": "BaseBdev3", 00:21:34.048 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:34.048 "is_configured": true, 00:21:34.048 "data_offset": 2048, 00:21:34.048 "data_size": 63488 00:21:34.048 }, 00:21:34.048 { 00:21:34.048 "name": "BaseBdev4", 00:21:34.048 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:34.048 "is_configured": true, 00:21:34.048 "data_offset": 2048, 00:21:34.048 "data_size": 63488 00:21:34.048 } 00:21:34.048 ] 00:21:34.048 }' 00:21:34.048 16:39:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:34.306 16:39:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:34.306 16:39:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:34.306 16:39:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:34.306 16:39:05 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:34.564 [2024-07-13 16:39:05.798928] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:34.564 [2024-07-13 16:39:05.799595] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:34.564 [2024-07-13 16:39:05.799808] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.564 16:39:05 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:34.564 16:39:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:34.564 16:39:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:34.564 16:39:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:34.564 16:39:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:34.564 16:39:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:34.564 16:39:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.564 16:39:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:34.565 16:39:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.565 16:39:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.565 16:39:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.565 16:39:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.823 16:39:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.823 "name": "raid_bdev1", 00:21:34.823 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:34.823 "strip_size_kb": 0, 00:21:34.823 "state": "online", 00:21:34.823 "raid_level": "raid1", 00:21:34.823 "superblock": true, 00:21:34.823 "num_base_bdevs": 4, 00:21:34.823 "num_base_bdevs_discovered": 3, 00:21:34.823 "num_base_bdevs_operational": 3, 00:21:34.823 "base_bdevs_list": [ 00:21:34.823 { 00:21:34.823 "name": null, 00:21:34.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.823 "is_configured": false, 00:21:34.823 "data_offset": 2048, 00:21:34.823 "data_size": 63488 
00:21:34.823 }, 00:21:34.823 { 00:21:34.823 "name": "BaseBdev2", 00:21:34.823 "uuid": "e17366e4-d413-58b7-add6-03905225af8e", 00:21:34.823 "is_configured": true, 00:21:34.823 "data_offset": 2048, 00:21:34.823 "data_size": 63488 00:21:34.823 }, 00:21:34.823 { 00:21:34.823 "name": "BaseBdev3", 00:21:34.823 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:34.823 "is_configured": true, 00:21:34.823 "data_offset": 2048, 00:21:34.823 "data_size": 63488 00:21:34.823 }, 00:21:34.823 { 00:21:34.823 "name": "BaseBdev4", 00:21:34.823 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:34.823 "is_configured": true, 00:21:34.823 "data_offset": 2048, 00:21:34.823 "data_size": 63488 00:21:34.823 } 00:21:34.823 ] 00:21:34.823 }' 00:21:34.823 16:39:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.823 16:39:06 -- common/autotest_common.sh@10 -- # set +x 00:21:35.389 16:39:06 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:35.389 16:39:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:35.389 16:39:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:35.389 16:39:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:35.389 16:39:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:35.389 16:39:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.389 16:39:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.648 16:39:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.648 "name": "raid_bdev1", 00:21:35.648 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:35.648 "strip_size_kb": 0, 00:21:35.648 "state": "online", 00:21:35.648 "raid_level": "raid1", 00:21:35.648 "superblock": true, 00:21:35.648 "num_base_bdevs": 4, 00:21:35.648 "num_base_bdevs_discovered": 3, 00:21:35.648 "num_base_bdevs_operational": 3, 00:21:35.648 "base_bdevs_list": [ 00:21:35.648 { 00:21:35.648 "name": null, 00:21:35.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.648 "is_configured": false, 00:21:35.648 "data_offset": 2048, 00:21:35.648 "data_size": 63488 00:21:35.648 }, 00:21:35.648 { 00:21:35.648 "name": "BaseBdev2", 00:21:35.648 "uuid": "e17366e4-d413-58b7-add6-03905225af8e", 00:21:35.648 "is_configured": true, 00:21:35.648 "data_offset": 2048, 00:21:35.648 "data_size": 63488 00:21:35.648 }, 00:21:35.648 { 00:21:35.648 "name": "BaseBdev3", 00:21:35.648 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:35.648 "is_configured": true, 00:21:35.648 "data_offset": 2048, 00:21:35.648 "data_size": 63488 00:21:35.648 }, 00:21:35.648 { 00:21:35.648 "name": "BaseBdev4", 00:21:35.648 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:35.648 "is_configured": true, 00:21:35.648 "data_offset": 2048, 00:21:35.648 "data_size": 63488 00:21:35.648 } 00:21:35.648 ] 00:21:35.648 }' 00:21:35.648 16:39:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.648 16:39:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:35.648 16:39:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.648 16:39:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:35.648 16:39:07 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:35.906 [2024-07-13 16:39:07.207873] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:35.906 [2024-07-13 16:39:07.208255] 
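The process objects in these JSON dumps are what verify_raid_bdev_process keys on: .process.type should read "rebuild" and .process.target "spare" while recovery runs, then both fall back to "none". A compact watcher built from the same jq filters (a sketch, not a helper the script itself defines):

  while :; do
      info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
      # progress is reported in blocks and percent, e.g. 24576 / 38 above
      jq -r '.process.progress.percent' <<< "$info"
      sleep 1
  done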
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:35.906 [2024-07-13 16:39:07.214715] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e5c0 00:21:35.906 [2024-07-13 16:39:07.217593] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:35.906 16:39:07 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:36.841 16:39:08 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.841 16:39:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:36.841 16:39:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:36.841 16:39:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:36.841 16:39:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:36.841 16:39:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.841 16:39:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.098 16:39:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:37.098 "name": "raid_bdev1", 00:21:37.098 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:37.098 "strip_size_kb": 0, 00:21:37.098 "state": "online", 00:21:37.098 "raid_level": "raid1", 00:21:37.098 "superblock": true, 00:21:37.098 "num_base_bdevs": 4, 00:21:37.098 "num_base_bdevs_discovered": 4, 00:21:37.098 "num_base_bdevs_operational": 4, 00:21:37.098 "process": { 00:21:37.098 "type": "rebuild", 00:21:37.098 "target": "spare", 00:21:37.098 "progress": { 00:21:37.098 "blocks": 24576, 00:21:37.098 "percent": 38 00:21:37.098 } 00:21:37.098 }, 00:21:37.098 "base_bdevs_list": [ 00:21:37.098 { 00:21:37.098 "name": "spare", 00:21:37.098 "uuid": "39253975-c9dd-5120-b69e-a97ccd74d61b", 00:21:37.098 "is_configured": true, 00:21:37.098 "data_offset": 2048, 00:21:37.098 "data_size": 63488 00:21:37.098 }, 00:21:37.098 { 00:21:37.098 "name": "BaseBdev2", 00:21:37.098 "uuid": "e17366e4-d413-58b7-add6-03905225af8e", 00:21:37.099 "is_configured": true, 00:21:37.099 "data_offset": 2048, 00:21:37.099 "data_size": 63488 00:21:37.099 }, 00:21:37.099 { 00:21:37.099 "name": "BaseBdev3", 00:21:37.099 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:37.099 "is_configured": true, 00:21:37.099 "data_offset": 2048, 00:21:37.099 "data_size": 63488 00:21:37.099 }, 00:21:37.099 { 00:21:37.099 "name": "BaseBdev4", 00:21:37.099 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:37.099 "is_configured": true, 00:21:37.099 "data_offset": 2048, 00:21:37.099 "data_size": 63488 00:21:37.099 } 00:21:37.099 ] 00:21:37.099 }' 00:21:37.099 16:39:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:37.099 16:39:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.099 16:39:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:37.356 16:39:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.356 16:39:08 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:37.356 16:39:08 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:37.357 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:37.357 16:39:08 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:37.357 16:39:08 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:37.357 16:39:08 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:37.357 16:39:08 -- bdev/bdev_raid.sh@646 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:37.357 [2024-07-13 16:39:08.795428] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:37.615 [2024-07-13 16:39:08.830224] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e5c0 00:21:37.615 16:39:08 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:37.615 16:39:08 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:37.615 16:39:08 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.615 16:39:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:37.615 16:39:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:37.615 16:39:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:37.615 16:39:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:37.615 16:39:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.615 16:39:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.872 16:39:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:37.872 "name": "raid_bdev1", 00:21:37.872 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:37.872 "strip_size_kb": 0, 00:21:37.872 "state": "online", 00:21:37.872 "raid_level": "raid1", 00:21:37.872 "superblock": true, 00:21:37.872 "num_base_bdevs": 4, 00:21:37.872 "num_base_bdevs_discovered": 3, 00:21:37.872 "num_base_bdevs_operational": 3, 00:21:37.872 "process": { 00:21:37.872 "type": "rebuild", 00:21:37.872 "target": "spare", 00:21:37.872 "progress": { 00:21:37.872 "blocks": 38912, 00:21:37.872 "percent": 61 00:21:37.872 } 00:21:37.872 }, 00:21:37.872 "base_bdevs_list": [ 00:21:37.872 { 00:21:37.872 "name": "spare", 00:21:37.872 "uuid": "39253975-c9dd-5120-b69e-a97ccd74d61b", 00:21:37.872 "is_configured": true, 00:21:37.872 "data_offset": 2048, 00:21:37.872 "data_size": 63488 00:21:37.872 }, 00:21:37.872 { 00:21:37.872 "name": null, 00:21:37.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.872 "is_configured": false, 00:21:37.872 "data_offset": 2048, 00:21:37.872 "data_size": 63488 00:21:37.872 }, 00:21:37.872 { 00:21:37.872 "name": "BaseBdev3", 00:21:37.872 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:37.872 "is_configured": true, 00:21:37.872 "data_offset": 2048, 00:21:37.872 "data_size": 63488 00:21:37.872 }, 00:21:37.872 { 00:21:37.872 "name": "BaseBdev4", 00:21:37.872 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:37.872 "is_configured": true, 00:21:37.872 "data_offset": 2048, 00:21:37.872 "data_size": 63488 00:21:37.872 } 00:21:37.872 ] 00:21:37.872 }' 00:21:37.872 16:39:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:37.872 16:39:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.872 16:39:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:37.872 16:39:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.873 16:39:09 -- bdev/bdev_raid.sh@657 -- # local timeout=485 00:21:37.873 16:39:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:37.873 16:39:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.873 16:39:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:37.873 16:39:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:37.873 16:39:09 -- bdev/bdev_raid.sh@185 -- # local 
target=spare 00:21:37.873 16:39:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:37.873 16:39:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.873 16:39:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.129 16:39:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:38.129 "name": "raid_bdev1", 00:21:38.129 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:38.129 "strip_size_kb": 0, 00:21:38.129 "state": "online", 00:21:38.129 "raid_level": "raid1", 00:21:38.129 "superblock": true, 00:21:38.129 "num_base_bdevs": 4, 00:21:38.129 "num_base_bdevs_discovered": 3, 00:21:38.129 "num_base_bdevs_operational": 3, 00:21:38.129 "process": { 00:21:38.129 "type": "rebuild", 00:21:38.129 "target": "spare", 00:21:38.129 "progress": { 00:21:38.129 "blocks": 45056, 00:21:38.129 "percent": 70 00:21:38.129 } 00:21:38.129 }, 00:21:38.129 "base_bdevs_list": [ 00:21:38.129 { 00:21:38.129 "name": "spare", 00:21:38.129 "uuid": "39253975-c9dd-5120-b69e-a97ccd74d61b", 00:21:38.129 "is_configured": true, 00:21:38.129 "data_offset": 2048, 00:21:38.129 "data_size": 63488 00:21:38.129 }, 00:21:38.129 { 00:21:38.129 "name": null, 00:21:38.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.129 "is_configured": false, 00:21:38.129 "data_offset": 2048, 00:21:38.129 "data_size": 63488 00:21:38.129 }, 00:21:38.129 { 00:21:38.129 "name": "BaseBdev3", 00:21:38.130 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:38.130 "is_configured": true, 00:21:38.130 "data_offset": 2048, 00:21:38.130 "data_size": 63488 00:21:38.130 }, 00:21:38.130 { 00:21:38.130 "name": "BaseBdev4", 00:21:38.130 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:38.130 "is_configured": true, 00:21:38.130 "data_offset": 2048, 00:21:38.130 "data_size": 63488 00:21:38.130 } 00:21:38.130 ] 00:21:38.130 }' 00:21:38.130 16:39:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:38.130 16:39:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.130 16:39:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:38.387 16:39:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.387 16:39:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:38.952 [2024-07-13 16:39:10.342911] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:38.952 [2024-07-13 16:39:10.343382] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:38.952 [2024-07-13 16:39:10.343736] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.210 16:39:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:39.210 16:39:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.210 16:39:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:39.210 16:39:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:39.210 16:39:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:39.210 16:39:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:39.210 16:39:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.210 16:39:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:39.467 "name": "raid_bdev1", 00:21:39.467 "uuid": 
"46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:39.467 "strip_size_kb": 0, 00:21:39.467 "state": "online", 00:21:39.467 "raid_level": "raid1", 00:21:39.467 "superblock": true, 00:21:39.467 "num_base_bdevs": 4, 00:21:39.467 "num_base_bdevs_discovered": 3, 00:21:39.467 "num_base_bdevs_operational": 3, 00:21:39.467 "base_bdevs_list": [ 00:21:39.467 { 00:21:39.467 "name": "spare", 00:21:39.467 "uuid": "39253975-c9dd-5120-b69e-a97ccd74d61b", 00:21:39.467 "is_configured": true, 00:21:39.467 "data_offset": 2048, 00:21:39.467 "data_size": 63488 00:21:39.467 }, 00:21:39.467 { 00:21:39.467 "name": null, 00:21:39.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.467 "is_configured": false, 00:21:39.467 "data_offset": 2048, 00:21:39.467 "data_size": 63488 00:21:39.467 }, 00:21:39.467 { 00:21:39.467 "name": "BaseBdev3", 00:21:39.467 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:39.467 "is_configured": true, 00:21:39.467 "data_offset": 2048, 00:21:39.467 "data_size": 63488 00:21:39.467 }, 00:21:39.467 { 00:21:39.467 "name": "BaseBdev4", 00:21:39.467 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:39.467 "is_configured": true, 00:21:39.467 "data_offset": 2048, 00:21:39.467 "data_size": 63488 00:21:39.467 } 00:21:39.467 ] 00:21:39.467 }' 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@660 -- # break 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:39.467 16:39:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:39.726 16:39:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.726 16:39:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.726 16:39:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:39.726 "name": "raid_bdev1", 00:21:39.726 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:39.726 "strip_size_kb": 0, 00:21:39.726 "state": "online", 00:21:39.726 "raid_level": "raid1", 00:21:39.726 "superblock": true, 00:21:39.726 "num_base_bdevs": 4, 00:21:39.726 "num_base_bdevs_discovered": 3, 00:21:39.726 "num_base_bdevs_operational": 3, 00:21:39.726 "base_bdevs_list": [ 00:21:39.726 { 00:21:39.726 "name": "spare", 00:21:39.726 "uuid": "39253975-c9dd-5120-b69e-a97ccd74d61b", 00:21:39.726 "is_configured": true, 00:21:39.726 "data_offset": 2048, 00:21:39.726 "data_size": 63488 00:21:39.726 }, 00:21:39.726 { 00:21:39.726 "name": null, 00:21:39.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.726 "is_configured": false, 00:21:39.726 "data_offset": 2048, 00:21:39.726 "data_size": 63488 00:21:39.726 }, 00:21:39.726 { 00:21:39.726 "name": "BaseBdev3", 00:21:39.726 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:39.726 "is_configured": true, 00:21:39.726 "data_offset": 2048, 00:21:39.726 "data_size": 63488 00:21:39.726 }, 00:21:39.726 { 00:21:39.726 "name": "BaseBdev4", 00:21:39.726 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:39.726 
"is_configured": true, 00:21:39.726 "data_offset": 2048, 00:21:39.726 "data_size": 63488 00:21:39.726 } 00:21:39.726 ] 00:21:39.726 }' 00:21:39.726 16:39:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.985 16:39:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.244 16:39:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:40.244 "name": "raid_bdev1", 00:21:40.244 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:40.244 "strip_size_kb": 0, 00:21:40.244 "state": "online", 00:21:40.244 "raid_level": "raid1", 00:21:40.244 "superblock": true, 00:21:40.244 "num_base_bdevs": 4, 00:21:40.244 "num_base_bdevs_discovered": 3, 00:21:40.244 "num_base_bdevs_operational": 3, 00:21:40.244 "base_bdevs_list": [ 00:21:40.244 { 00:21:40.244 "name": "spare", 00:21:40.244 "uuid": "39253975-c9dd-5120-b69e-a97ccd74d61b", 00:21:40.244 "is_configured": true, 00:21:40.244 "data_offset": 2048, 00:21:40.244 "data_size": 63488 00:21:40.244 }, 00:21:40.244 { 00:21:40.244 "name": null, 00:21:40.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.244 "is_configured": false, 00:21:40.244 "data_offset": 2048, 00:21:40.244 "data_size": 63488 00:21:40.244 }, 00:21:40.244 { 00:21:40.244 "name": "BaseBdev3", 00:21:40.244 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:40.244 "is_configured": true, 00:21:40.244 "data_offset": 2048, 00:21:40.244 "data_size": 63488 00:21:40.244 }, 00:21:40.244 { 00:21:40.244 "name": "BaseBdev4", 00:21:40.244 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:40.244 "is_configured": true, 00:21:40.244 "data_offset": 2048, 00:21:40.244 "data_size": 63488 00:21:40.244 } 00:21:40.244 ] 00:21:40.244 }' 00:21:40.244 16:39:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:40.244 16:39:11 -- common/autotest_common.sh@10 -- # set +x 00:21:40.811 16:39:12 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:41.069 [2024-07-13 16:39:12.391575] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.069 [2024-07-13 16:39:12.391922] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:41.069 [2024-07-13 16:39:12.392242] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.069 [2024-07-13 
16:39:12.392500] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:41.070 [2024-07-13 16:39:12.392605] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:21:41.070 16:39:12 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:41.070 16:39:12 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.328 16:39:12 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:41.328 16:39:12 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:41.328 16:39:12 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:41.328 16:39:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:41.328 16:39:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:41.328 16:39:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:41.328 16:39:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:41.328 16:39:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:41.328 16:39:12 -- bdev/nbd_common.sh@12 -- # local i 00:21:41.328 16:39:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:41.328 16:39:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:41.328 16:39:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:41.617 /dev/nbd0 00:21:41.617 16:39:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:41.617 16:39:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:41.617 16:39:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:41.617 16:39:13 -- common/autotest_common.sh@857 -- # local i 00:21:41.617 16:39:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:41.617 16:39:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:41.617 16:39:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:41.617 16:39:13 -- common/autotest_common.sh@861 -- # break 00:21:41.617 16:39:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:41.617 16:39:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:41.617 16:39:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.617 1+0 records in 00:21:41.617 1+0 records out 00:21:41.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076714 s, 5.3 MB/s 00:21:41.617 16:39:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.617 16:39:13 -- common/autotest_common.sh@874 -- # size=4096 00:21:41.617 16:39:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.617 16:39:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:41.617 16:39:13 -- common/autotest_common.sh@877 -- # return 0 00:21:41.617 16:39:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.617 16:39:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:41.617 16:39:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:41.886 /dev/nbd1 00:21:41.886 16:39:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:41.886 16:39:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:41.886 16:39:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:41.886 16:39:13 -- 
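With superblock enabled, data_offset is 2048 blocks of 512 bytes, so member data begins 1 MiB into each base bdev; the byte-for-byte check that follows therefore skips that region on both NBD exports, since each member's first MiB holds its own superblock rather than mirrored data. In effect (same devices as above):

  # data_offset: 2048 blocks * 512 B/block = 1048576 B of superblock/padding
  cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1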
common/autotest_common.sh@857 -- # local i 00:21:41.886 16:39:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:41.886 16:39:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:41.886 16:39:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:42.145 16:39:13 -- common/autotest_common.sh@861 -- # break 00:21:42.145 16:39:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:42.145 16:39:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:42.145 16:39:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:42.145 1+0 records in 00:21:42.145 1+0 records out 00:21:42.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657421 s, 6.2 MB/s 00:21:42.145 16:39:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.145 16:39:13 -- common/autotest_common.sh@874 -- # size=4096 00:21:42.145 16:39:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:42.145 16:39:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:42.145 16:39:13 -- common/autotest_common.sh@877 -- # return 0 00:21:42.145 16:39:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:42.145 16:39:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:42.145 16:39:13 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:42.145 16:39:13 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:42.145 16:39:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:42.145 16:39:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:42.145 16:39:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:42.145 16:39:13 -- bdev/nbd_common.sh@51 -- # local i 00:21:42.145 16:39:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.145 16:39:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:42.403 16:39:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:42.403 16:39:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:42.403 16:39:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:42.403 16:39:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.403 16:39:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.404 16:39:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:42.404 16:39:13 -- bdev/nbd_common.sh@41 -- # break 00:21:42.404 16:39:13 -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.404 16:39:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.404 16:39:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:42.661 16:39:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:42.661 16:39:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:42.661 16:39:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:42.661 16:39:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.661 16:39:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.661 16:39:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:42.661 16:39:14 -- bdev/nbd_common.sh@41 -- # break 00:21:42.661 16:39:14 -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.661 16:39:14 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:42.661 16:39:14 -- bdev/bdev_raid.sh@694 -- # for bdev in 
"${base_bdevs[@]}" 00:21:42.661 16:39:14 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:42.661 16:39:14 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:42.919 16:39:14 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:43.177 [2024-07-13 16:39:14.516088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:43.177 [2024-07-13 16:39:14.516567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.177 [2024-07-13 16:39:14.516663] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:43.177 [2024-07-13 16:39:14.516780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.177 [2024-07-13 16:39:14.519701] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.177 [2024-07-13 16:39:14.519911] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:43.177 [2024-07-13 16:39:14.520138] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:43.177 [2024-07-13 16:39:14.520356] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:43.177 BaseBdev1 00:21:43.177 16:39:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:43.177 16:39:14 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:43.177 16:39:14 -- bdev/bdev_raid.sh@696 -- # continue 00:21:43.177 16:39:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:43.177 16:39:14 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:43.177 16:39:14 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:43.437 16:39:14 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:43.696 [2024-07-13 16:39:15.012367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:43.696 [2024-07-13 16:39:15.012818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.696 [2024-07-13 16:39:15.012906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:43.696 [2024-07-13 16:39:15.013078] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.696 [2024-07-13 16:39:15.013634] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.696 [2024-07-13 16:39:15.013832] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:43.696 [2024-07-13 16:39:15.014075] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:43.696 [2024-07-13 16:39:15.014180] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:43.696 [2024-07-13 16:39:15.014257] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.696 [2024-07-13 16:39:15.014330] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:21:43.696 [2024-07-13 16:39:15.014450] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:43.696 BaseBdev3 00:21:43.696 16:39:15 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:43.696 16:39:15 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:43.696 16:39:15 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:43.955 16:39:15 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:43.955 [2024-07-13 16:39:15.424417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:43.955 [2024-07-13 16:39:15.424815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.955 [2024-07-13 16:39:15.424905] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:43.955 [2024-07-13 16:39:15.425018] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.955 [2024-07-13 16:39:15.425582] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.955 [2024-07-13 16:39:15.425756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:43.955 [2024-07-13 16:39:15.425934] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:44.214 [2024-07-13 16:39:15.426052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:44.214 BaseBdev4 00:21:44.214 16:39:15 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:44.214 16:39:15 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:44.473 [2024-07-13 16:39:15.836598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:44.473 [2024-07-13 16:39:15.837006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.473 [2024-07-13 16:39:15.837086] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:44.473 [2024-07-13 16:39:15.837230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.473 [2024-07-13 16:39:15.837815] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.473 [2024-07-13 16:39:15.837980] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:44.473 [2024-07-13 16:39:15.838162] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:44.473 [2024-07-13 16:39:15.838268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:44.473 spare 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:44.473 16:39:15 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.473 16:39:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.473 [2024-07-13 16:39:15.938479] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:21:44.473 [2024-07-13 16:39:15.938764] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:44.473 [2024-07-13 16:39:15.939029] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caf0b0 00:21:44.473 [2024-07-13 16:39:15.939653] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:21:44.473 [2024-07-13 16:39:15.939766] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:21:44.473 [2024-07-13 16:39:15.940001] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.733 16:39:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:44.733 "name": "raid_bdev1", 00:21:44.733 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 00:21:44.733 "strip_size_kb": 0, 00:21:44.733 "state": "online", 00:21:44.733 "raid_level": "raid1", 00:21:44.733 "superblock": true, 00:21:44.733 "num_base_bdevs": 4, 00:21:44.733 "num_base_bdevs_discovered": 3, 00:21:44.733 "num_base_bdevs_operational": 3, 00:21:44.733 "base_bdevs_list": [ 00:21:44.733 { 00:21:44.733 "name": "spare", 00:21:44.733 "uuid": "39253975-c9dd-5120-b69e-a97ccd74d61b", 00:21:44.733 "is_configured": true, 00:21:44.733 "data_offset": 2048, 00:21:44.733 "data_size": 63488 00:21:44.733 }, 00:21:44.733 { 00:21:44.733 "name": null, 00:21:44.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.733 "is_configured": false, 00:21:44.733 "data_offset": 2048, 00:21:44.733 "data_size": 63488 00:21:44.733 }, 00:21:44.733 { 00:21:44.733 "name": "BaseBdev3", 00:21:44.733 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:44.733 "is_configured": true, 00:21:44.733 "data_offset": 2048, 00:21:44.733 "data_size": 63488 00:21:44.733 }, 00:21:44.733 { 00:21:44.733 "name": "BaseBdev4", 00:21:44.733 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:44.733 "is_configured": true, 00:21:44.733 "data_offset": 2048, 00:21:44.733 "data_size": 63488 00:21:44.733 } 00:21:44.733 ] 00:21:44.733 }' 00:21:44.733 16:39:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:44.733 16:39:16 -- common/autotest_common.sh@10 -- # set +x 00:21:45.301 16:39:16 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.301 16:39:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:45.301 16:39:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:45.301 16:39:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:45.301 16:39:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:45.301 16:39:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.301 16:39:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.560 16:39:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:45.560 "name": "raid_bdev1", 00:21:45.560 "uuid": "46b53b54-79c6-4a7a-9e65-e8628281410f", 
00:21:45.560 "strip_size_kb": 0, 00:21:45.560 "state": "online", 00:21:45.560 "raid_level": "raid1", 00:21:45.560 "superblock": true, 00:21:45.560 "num_base_bdevs": 4, 00:21:45.560 "num_base_bdevs_discovered": 3, 00:21:45.560 "num_base_bdevs_operational": 3, 00:21:45.560 "base_bdevs_list": [ 00:21:45.560 { 00:21:45.560 "name": "spare", 00:21:45.560 "uuid": "39253975-c9dd-5120-b69e-a97ccd74d61b", 00:21:45.560 "is_configured": true, 00:21:45.560 "data_offset": 2048, 00:21:45.560 "data_size": 63488 00:21:45.560 }, 00:21:45.560 { 00:21:45.560 "name": null, 00:21:45.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.560 "is_configured": false, 00:21:45.560 "data_offset": 2048, 00:21:45.560 "data_size": 63488 00:21:45.560 }, 00:21:45.560 { 00:21:45.560 "name": "BaseBdev3", 00:21:45.560 "uuid": "57696702-378b-50bf-b2db-d4f3d419654e", 00:21:45.560 "is_configured": true, 00:21:45.560 "data_offset": 2048, 00:21:45.560 "data_size": 63488 00:21:45.560 }, 00:21:45.560 { 00:21:45.560 "name": "BaseBdev4", 00:21:45.560 "uuid": "1c189a7f-3db4-5237-bec5-3965cd3a79c4", 00:21:45.560 "is_configured": true, 00:21:45.560 "data_offset": 2048, 00:21:45.560 "data_size": 63488 00:21:45.560 } 00:21:45.560 ] 00:21:45.560 }' 00:21:45.560 16:39:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:45.819 16:39:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:45.819 16:39:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:45.819 16:39:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:45.819 16:39:17 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:45.819 16:39:17 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.079 16:39:17 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.079 16:39:17 -- bdev/bdev_raid.sh@709 -- # killprocess 135724 00:21:46.079 16:39:17 -- common/autotest_common.sh@926 -- # '[' -z 135724 ']' 00:21:46.079 16:39:17 -- common/autotest_common.sh@930 -- # kill -0 135724 00:21:46.079 16:39:17 -- common/autotest_common.sh@931 -- # uname 00:21:46.079 16:39:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:46.079 16:39:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135724 00:21:46.079 killing process with pid 135724 00:21:46.079 Received shutdown signal, test time was about 60.000000 seconds 00:21:46.079 00:21:46.079 Latency(us) 00:21:46.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.079 =================================================================================================================== 00:21:46.079 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:46.079 16:39:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:46.079 16:39:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:46.079 16:39:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135724' 00:21:46.079 16:39:17 -- common/autotest_common.sh@945 -- # kill 135724 00:21:46.079 16:39:17 -- common/autotest_common.sh@950 -- # wait 135724 00:21:46.079 [2024-07-13 16:39:17.396203] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:46.079 [2024-07-13 16:39:17.396347] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.079 [2024-07-13 16:39:17.396456] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:21:46.079 [2024-07-13 16:39:17.396466] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:21:46.079 [2024-07-13 16:39:17.496646] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:46.648 16:39:17 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:46.648 00:21:46.648 real 0m26.478s 00:21:46.648 user 0m38.253s 00:21:46.648 sys 0m5.335s 00:21:46.648 16:39:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.648 16:39:17 -- common/autotest_common.sh@10 -- # set +x 00:21:46.648 ************************************ 00:21:46.648 END TEST raid_rebuild_test_sb 00:21:46.648 ************************************ 00:21:46.648 16:39:17 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:21:46.648 16:39:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:46.648 16:39:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:46.648 16:39:17 -- common/autotest_common.sh@10 -- # set +x 00:21:46.648 ************************************ 00:21:46.648 START TEST raid_rebuild_test_io 00:21:46.648 ************************************ 00:21:46.648 16:39:18 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@544 -- # raid_pid=136371 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 
3M -q 2 -U -z -L bdev_raid 00:21:46.648 16:39:18 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136371 /var/tmp/spdk-raid.sock 00:21:46.648 16:39:18 -- common/autotest_common.sh@819 -- # '[' -z 136371 ']' 00:21:46.648 16:39:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:46.648 16:39:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.648 16:39:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:46.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:46.648 16:39:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.648 16:39:18 -- common/autotest_common.sh@10 -- # set +x 00:21:46.648 [2024-07-13 16:39:18.094271] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:46.648 [2024-07-13 16:39:18.094564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136371 ] 00:21:46.648 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:46.648 Zero copy mechanism will not be used. 00:21:46.908 [2024-07-13 16:39:18.252919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.908 [2024-07-13 16:39:18.341794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.167 [2024-07-13 16:39:18.424491] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:47.735 16:39:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.735 16:39:18 -- common/autotest_common.sh@852 -- # return 0 00:21:47.735 16:39:18 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:47.735 16:39:18 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:47.735 16:39:18 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:48.000 BaseBdev1 00:21:48.000 16:39:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:48.000 16:39:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:48.000 16:39:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:48.000 BaseBdev2 00:21:48.264 16:39:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:48.264 16:39:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:48.264 16:39:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:48.521 BaseBdev3 00:21:48.521 16:39:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:48.521 16:39:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:48.521 16:39:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:48.521 BaseBdev4 00:21:48.521 16:39:19 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:48.779 spare_malloc 00:21:48.779 16:39:20 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 
00:21:49.037 spare_delay 00:21:49.037 16:39:20 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:49.296 [2024-07-13 16:39:20.585356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:49.296 [2024-07-13 16:39:20.585528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.296 [2024-07-13 16:39:20.585579] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:49.296 [2024-07-13 16:39:20.585638] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.296 [2024-07-13 16:39:20.588815] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.296 [2024-07-13 16:39:20.588906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:49.296 spare 00:21:49.296 16:39:20 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:49.555 [2024-07-13 16:39:20.793527] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.555 [2024-07-13 16:39:20.796157] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.555 [2024-07-13 16:39:20.796233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:49.555 [2024-07-13 16:39:20.796288] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:49.555 [2024-07-13 16:39:20.796385] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:21:49.555 [2024-07-13 16:39:20.796395] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:49.555 [2024-07-13 16:39:20.796659] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:21:49.555 [2024-07-13 16:39:20.797140] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:21:49.555 [2024-07-13 16:39:20.797161] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:21:49.555 [2024-07-13 16:39:20.797480] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.555 16:39:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.814 16:39:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:49.814 "name": 
"raid_bdev1", 00:21:49.814 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:21:49.814 "strip_size_kb": 0, 00:21:49.814 "state": "online", 00:21:49.814 "raid_level": "raid1", 00:21:49.814 "superblock": false, 00:21:49.814 "num_base_bdevs": 4, 00:21:49.814 "num_base_bdevs_discovered": 4, 00:21:49.814 "num_base_bdevs_operational": 4, 00:21:49.814 "base_bdevs_list": [ 00:21:49.814 { 00:21:49.814 "name": "BaseBdev1", 00:21:49.814 "uuid": "9d4a2753-7bf1-4642-8e03-b6a8ce66e145", 00:21:49.814 "is_configured": true, 00:21:49.814 "data_offset": 0, 00:21:49.814 "data_size": 65536 00:21:49.814 }, 00:21:49.814 { 00:21:49.814 "name": "BaseBdev2", 00:21:49.814 "uuid": "c8e1b2ee-784e-48b6-aaf3-b2e70973ecc6", 00:21:49.814 "is_configured": true, 00:21:49.814 "data_offset": 0, 00:21:49.814 "data_size": 65536 00:21:49.814 }, 00:21:49.814 { 00:21:49.814 "name": "BaseBdev3", 00:21:49.814 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:21:49.814 "is_configured": true, 00:21:49.814 "data_offset": 0, 00:21:49.814 "data_size": 65536 00:21:49.814 }, 00:21:49.814 { 00:21:49.814 "name": "BaseBdev4", 00:21:49.814 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:21:49.814 "is_configured": true, 00:21:49.814 "data_offset": 0, 00:21:49.814 "data_size": 65536 00:21:49.814 } 00:21:49.814 ] 00:21:49.814 }' 00:21:49.814 16:39:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:49.814 16:39:21 -- common/autotest_common.sh@10 -- # set +x 00:21:50.381 16:39:21 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:50.381 16:39:21 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:50.639 [2024-07-13 16:39:21.889894] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.639 16:39:21 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:50.639 16:39:21 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.639 16:39:21 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:50.898 16:39:22 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:50.898 16:39:22 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:50.898 16:39:22 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:50.898 16:39:22 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:50.898 [2024-07-13 16:39:22.285598] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:21:50.898 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:50.898 Zero copy mechanism will not be used. 00:21:50.898 Running I/O for 60 seconds... 
00:21:50.898 [2024-07-13 16:39:22.362712] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:51.160 [2024-07-13 16:39:22.369372] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.160 16:39:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.424 16:39:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.424 "name": "raid_bdev1", 00:21:51.424 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:21:51.424 "strip_size_kb": 0, 00:21:51.424 "state": "online", 00:21:51.424 "raid_level": "raid1", 00:21:51.424 "superblock": false, 00:21:51.424 "num_base_bdevs": 4, 00:21:51.424 "num_base_bdevs_discovered": 3, 00:21:51.424 "num_base_bdevs_operational": 3, 00:21:51.424 "base_bdevs_list": [ 00:21:51.424 { 00:21:51.424 "name": null, 00:21:51.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.424 "is_configured": false, 00:21:51.424 "data_offset": 0, 00:21:51.424 "data_size": 65536 00:21:51.424 }, 00:21:51.424 { 00:21:51.424 "name": "BaseBdev2", 00:21:51.424 "uuid": "c8e1b2ee-784e-48b6-aaf3-b2e70973ecc6", 00:21:51.424 "is_configured": true, 00:21:51.424 "data_offset": 0, 00:21:51.424 "data_size": 65536 00:21:51.424 }, 00:21:51.424 { 00:21:51.424 "name": "BaseBdev3", 00:21:51.424 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:21:51.424 "is_configured": true, 00:21:51.424 "data_offset": 0, 00:21:51.424 "data_size": 65536 00:21:51.424 }, 00:21:51.424 { 00:21:51.424 "name": "BaseBdev4", 00:21:51.424 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:21:51.424 "is_configured": true, 00:21:51.424 "data_offset": 0, 00:21:51.424 "data_size": 65536 00:21:51.424 } 00:21:51.424 ] 00:21:51.424 }' 00:21:51.424 16:39:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.424 16:39:22 -- common/autotest_common.sh@10 -- # set +x 00:21:51.990 16:39:23 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.247 [2024-07-13 16:39:23.485790] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:52.247 [2024-07-13 16:39:23.485895] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.247 16:39:23 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:52.247 [2024-07-13 16:39:23.544884] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:52.247 [2024-07-13 16:39:23.547719] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.247 [2024-07-13 
16:39:23.668449] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:52.247 [2024-07-13 16:39:23.670236] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:52.506 [2024-07-13 16:39:23.907176] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:52.506 [2024-07-13 16:39:23.908115] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:53.073 [2024-07-13 16:39:24.261118] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:53.073 [2024-07-13 16:39:24.261898] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:53.073 [2024-07-13 16:39:24.488640] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:53.073 16:39:24 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.073 16:39:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:53.073 16:39:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:53.073 16:39:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:53.073 16:39:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:53.073 16:39:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.331 16:39:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.331 [2024-07-13 16:39:24.771934] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:53.591 16:39:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:53.591 "name": "raid_bdev1", 00:21:53.591 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:21:53.591 "strip_size_kb": 0, 00:21:53.591 "state": "online", 00:21:53.591 "raid_level": "raid1", 00:21:53.591 "superblock": false, 00:21:53.591 "num_base_bdevs": 4, 00:21:53.591 "num_base_bdevs_discovered": 4, 00:21:53.591 "num_base_bdevs_operational": 4, 00:21:53.591 "process": { 00:21:53.591 "type": "rebuild", 00:21:53.591 "target": "spare", 00:21:53.591 "progress": { 00:21:53.591 "blocks": 14336, 00:21:53.591 "percent": 21 00:21:53.591 } 00:21:53.591 }, 00:21:53.591 "base_bdevs_list": [ 00:21:53.591 { 00:21:53.591 "name": "spare", 00:21:53.591 "uuid": "5174be6e-f508-553c-aff0-41f8a879820a", 00:21:53.591 "is_configured": true, 00:21:53.591 "data_offset": 0, 00:21:53.591 "data_size": 65536 00:21:53.591 }, 00:21:53.591 { 00:21:53.591 "name": "BaseBdev2", 00:21:53.591 "uuid": "c8e1b2ee-784e-48b6-aaf3-b2e70973ecc6", 00:21:53.591 "is_configured": true, 00:21:53.591 "data_offset": 0, 00:21:53.591 "data_size": 65536 00:21:53.591 }, 00:21:53.591 { 00:21:53.591 "name": "BaseBdev3", 00:21:53.591 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:21:53.591 "is_configured": true, 00:21:53.591 "data_offset": 0, 00:21:53.591 "data_size": 65536 00:21:53.591 }, 00:21:53.591 { 00:21:53.591 "name": "BaseBdev4", 00:21:53.591 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:21:53.591 "is_configured": true, 00:21:53.591 "data_offset": 0, 00:21:53.591 "data_size": 65536 00:21:53.591 } 00:21:53.591 ] 00:21:53.591 }' 00:21:53.591 16:39:24 -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.type // "none"' 00:21:53.591 16:39:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.591 16:39:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:53.591 16:39:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.591 16:39:24 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:53.591 [2024-07-13 16:39:24.996714] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:53.849 [2024-07-13 16:39:25.237409] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.849 [2024-07-13 16:39:25.245842] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:54.108 [2024-07-13 16:39:25.354815] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:54.108 [2024-07-13 16:39:25.367442] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.108 [2024-07-13 16:39:25.392540] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.108 16:39:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.366 16:39:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.366 "name": "raid_bdev1", 00:21:54.366 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:21:54.366 "strip_size_kb": 0, 00:21:54.366 "state": "online", 00:21:54.366 "raid_level": "raid1", 00:21:54.366 "superblock": false, 00:21:54.366 "num_base_bdevs": 4, 00:21:54.366 "num_base_bdevs_discovered": 3, 00:21:54.366 "num_base_bdevs_operational": 3, 00:21:54.366 "base_bdevs_list": [ 00:21:54.366 { 00:21:54.366 "name": null, 00:21:54.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.366 "is_configured": false, 00:21:54.366 "data_offset": 0, 00:21:54.366 "data_size": 65536 00:21:54.366 }, 00:21:54.366 { 00:21:54.366 "name": "BaseBdev2", 00:21:54.366 "uuid": "c8e1b2ee-784e-48b6-aaf3-b2e70973ecc6", 00:21:54.366 "is_configured": true, 00:21:54.366 "data_offset": 0, 00:21:54.366 "data_size": 65536 00:21:54.366 }, 00:21:54.366 { 00:21:54.366 "name": "BaseBdev3", 00:21:54.366 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:21:54.366 "is_configured": true, 00:21:54.366 "data_offset": 0, 00:21:54.366 "data_size": 65536 00:21:54.366 }, 00:21:54.366 { 00:21:54.366 "name": "BaseBdev4", 00:21:54.366 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 
00:21:54.366 "is_configured": true, 00:21:54.366 "data_offset": 0, 00:21:54.366 "data_size": 65536 00:21:54.366 } 00:21:54.366 ] 00:21:54.366 }' 00:21:54.366 16:39:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.366 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:55.301 "name": "raid_bdev1", 00:21:55.301 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:21:55.301 "strip_size_kb": 0, 00:21:55.301 "state": "online", 00:21:55.301 "raid_level": "raid1", 00:21:55.301 "superblock": false, 00:21:55.301 "num_base_bdevs": 4, 00:21:55.301 "num_base_bdevs_discovered": 3, 00:21:55.301 "num_base_bdevs_operational": 3, 00:21:55.301 "base_bdevs_list": [ 00:21:55.301 { 00:21:55.301 "name": null, 00:21:55.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.301 "is_configured": false, 00:21:55.301 "data_offset": 0, 00:21:55.301 "data_size": 65536 00:21:55.301 }, 00:21:55.301 { 00:21:55.301 "name": "BaseBdev2", 00:21:55.301 "uuid": "c8e1b2ee-784e-48b6-aaf3-b2e70973ecc6", 00:21:55.301 "is_configured": true, 00:21:55.301 "data_offset": 0, 00:21:55.301 "data_size": 65536 00:21:55.301 }, 00:21:55.301 { 00:21:55.301 "name": "BaseBdev3", 00:21:55.301 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:21:55.301 "is_configured": true, 00:21:55.301 "data_offset": 0, 00:21:55.301 "data_size": 65536 00:21:55.301 }, 00:21:55.301 { 00:21:55.301 "name": "BaseBdev4", 00:21:55.301 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:21:55.301 "is_configured": true, 00:21:55.301 "data_offset": 0, 00:21:55.301 "data_size": 65536 00:21:55.301 } 00:21:55.301 ] 00:21:55.301 }' 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:55.301 16:39:26 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:55.559 [2024-07-13 16:39:26.992136] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:55.559 [2024-07-13 16:39:26.992217] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:55.817 16:39:27 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:55.817 [2024-07-13 16:39:27.046522] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:21:55.817 [2024-07-13 16:39:27.049354] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:56.076 [2024-07-13 16:39:27.308619] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:56.076 [2024-07-13 
16:39:27.309530] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:56.335 [2024-07-13 16:39:27.676936] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:56.335 [2024-07-13 16:39:27.678609] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:56.593 [2024-07-13 16:39:27.919280] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:56.593 [2024-07-13 16:39:27.920196] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:56.593 16:39:28 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.593 16:39:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:56.593 16:39:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:56.593 16:39:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:56.593 16:39:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:56.593 16:39:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.593 16:39:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.852 [2024-07-13 16:39:28.249679] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:56.852 16:39:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:56.852 "name": "raid_bdev1", 00:21:56.852 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:21:56.852 "strip_size_kb": 0, 00:21:56.852 "state": "online", 00:21:56.852 "raid_level": "raid1", 00:21:56.852 "superblock": false, 00:21:56.852 "num_base_bdevs": 4, 00:21:56.852 "num_base_bdevs_discovered": 4, 00:21:56.852 "num_base_bdevs_operational": 4, 00:21:56.852 "process": { 00:21:56.852 "type": "rebuild", 00:21:56.852 "target": "spare", 00:21:56.852 "progress": { 00:21:56.852 "blocks": 14336, 00:21:56.852 "percent": 21 00:21:56.852 } 00:21:56.852 }, 00:21:56.852 "base_bdevs_list": [ 00:21:56.852 { 00:21:56.852 "name": "spare", 00:21:56.852 "uuid": "5174be6e-f508-553c-aff0-41f8a879820a", 00:21:56.852 "is_configured": true, 00:21:56.852 "data_offset": 0, 00:21:56.852 "data_size": 65536 00:21:56.852 }, 00:21:56.852 { 00:21:56.852 "name": "BaseBdev2", 00:21:56.852 "uuid": "c8e1b2ee-784e-48b6-aaf3-b2e70973ecc6", 00:21:56.852 "is_configured": true, 00:21:56.852 "data_offset": 0, 00:21:56.852 "data_size": 65536 00:21:56.852 }, 00:21:56.852 { 00:21:56.852 "name": "BaseBdev3", 00:21:56.852 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:21:56.852 "is_configured": true, 00:21:56.852 "data_offset": 0, 00:21:56.852 "data_size": 65536 00:21:56.852 }, 00:21:56.852 { 00:21:56.852 "name": "BaseBdev4", 00:21:56.852 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:21:56.852 "is_configured": true, 00:21:56.852 "data_offset": 0, 00:21:56.852 "data_size": 65536 00:21:56.852 } 00:21:56.852 ] 00:21:56.852 }' 00:21:57.110 16:39:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:57.110 16:39:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.110 16:39:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:57.110 [2024-07-13 16:39:28.395534] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:57.110 [2024-07-13 16:39:28.395964] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:57.110 16:39:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.110 16:39:28 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:57.110 16:39:28 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:57.110 16:39:28 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:57.110 16:39:28 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:57.110 16:39:28 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:57.370 [2024-07-13 16:39:28.646202] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:57.370 [2024-07-13 16:39:28.673946] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:57.370 [2024-07-13 16:39:28.674718] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:57.370 [2024-07-13 16:39:28.682912] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002390 00:21:57.370 [2024-07-13 16:39:28.682961] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002600 00:21:57.370 16:39:28 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:57.370 16:39:28 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:57.370 16:39:28 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.370 16:39:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.370 16:39:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:57.370 16:39:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:57.370 16:39:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.370 16:39:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.370 16:39:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.370 [2024-07-13 16:39:28.806601] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:57.630 16:39:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:57.630 "name": "raid_bdev1", 00:21:57.630 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:21:57.630 "strip_size_kb": 0, 00:21:57.630 "state": "online", 00:21:57.630 "raid_level": "raid1", 00:21:57.630 "superblock": false, 00:21:57.630 "num_base_bdevs": 4, 00:21:57.630 "num_base_bdevs_discovered": 3, 00:21:57.630 "num_base_bdevs_operational": 3, 00:21:57.630 "process": { 00:21:57.630 "type": "rebuild", 00:21:57.630 "target": "spare", 00:21:57.630 "progress": { 00:21:57.630 "blocks": 24576, 00:21:57.630 "percent": 37 00:21:57.630 } 00:21:57.630 }, 00:21:57.630 "base_bdevs_list": [ 00:21:57.630 { 00:21:57.630 "name": "spare", 00:21:57.630 "uuid": "5174be6e-f508-553c-aff0-41f8a879820a", 00:21:57.630 "is_configured": true, 00:21:57.630 "data_offset": 0, 00:21:57.630 "data_size": 65536 00:21:57.630 }, 00:21:57.630 { 00:21:57.630 "name": null, 00:21:57.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.630 "is_configured": false, 00:21:57.630 "data_offset": 0, 00:21:57.630 "data_size": 65536 00:21:57.630 }, 
00:21:57.630 { 00:21:57.630 "name": "BaseBdev3", 00:21:57.630 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:21:57.630 "is_configured": true, 00:21:57.630 "data_offset": 0, 00:21:57.630 "data_size": 65536 00:21:57.630 }, 00:21:57.630 { 00:21:57.630 "name": "BaseBdev4", 00:21:57.630 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:21:57.630 "is_configured": true, 00:21:57.630 "data_offset": 0, 00:21:57.630 "data_size": 65536 00:21:57.630 } 00:21:57.630 ] 00:21:57.630 }' 00:21:57.630 16:39:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@657 -- # local timeout=505 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.630 16:39:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.889 [2024-07-13 16:39:29.191977] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:57.889 16:39:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:57.889 "name": "raid_bdev1", 00:21:57.889 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:21:57.889 "strip_size_kb": 0, 00:21:57.889 "state": "online", 00:21:57.889 "raid_level": "raid1", 00:21:57.889 "superblock": false, 00:21:57.889 "num_base_bdevs": 4, 00:21:57.889 "num_base_bdevs_discovered": 3, 00:21:57.889 "num_base_bdevs_operational": 3, 00:21:57.889 "process": { 00:21:57.889 "type": "rebuild", 00:21:57.889 "target": "spare", 00:21:57.889 "progress": { 00:21:57.889 "blocks": 28672, 00:21:57.889 "percent": 43 00:21:57.889 } 00:21:57.889 }, 00:21:57.889 "base_bdevs_list": [ 00:21:57.889 { 00:21:57.889 "name": "spare", 00:21:57.889 "uuid": "5174be6e-f508-553c-aff0-41f8a879820a", 00:21:57.889 "is_configured": true, 00:21:57.889 "data_offset": 0, 00:21:57.889 "data_size": 65536 00:21:57.889 }, 00:21:57.889 { 00:21:57.889 "name": null, 00:21:57.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.889 "is_configured": false, 00:21:57.889 "data_offset": 0, 00:21:57.889 "data_size": 65536 00:21:57.889 }, 00:21:57.889 { 00:21:57.889 "name": "BaseBdev3", 00:21:57.889 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:21:57.889 "is_configured": true, 00:21:57.889 "data_offset": 0, 00:21:57.889 "data_size": 65536 00:21:57.889 }, 00:21:57.889 { 00:21:57.889 "name": "BaseBdev4", 00:21:57.889 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:21:57.889 "is_configured": true, 00:21:57.889 "data_offset": 0, 00:21:57.889 "data_size": 65536 00:21:57.889 } 00:21:57.889 ] 00:21:57.889 }' 00:21:57.889 16:39:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:58.147 16:39:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:58.147 16:39:29 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:58.147 16:39:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.147 16:39:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:58.405 [2024-07-13 16:39:29.679791] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:58.664 [2024-07-13 16:39:30.033600] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:58.923 [2024-07-13 16:39:30.151061] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:59.181 16:39:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:59.181 16:39:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.181 16:39:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:59.181 16:39:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:59.181 16:39:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:59.181 16:39:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:59.181 16:39:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.181 16:39:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.439 16:39:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:59.439 "name": "raid_bdev1", 00:21:59.439 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:21:59.439 "strip_size_kb": 0, 00:21:59.439 "state": "online", 00:21:59.439 "raid_level": "raid1", 00:21:59.439 "superblock": false, 00:21:59.439 "num_base_bdevs": 4, 00:21:59.439 "num_base_bdevs_discovered": 3, 00:21:59.439 "num_base_bdevs_operational": 3, 00:21:59.439 "process": { 00:21:59.439 "type": "rebuild", 00:21:59.439 "target": "spare", 00:21:59.439 "progress": { 00:21:59.439 "blocks": 49152, 00:21:59.439 "percent": 75 00:21:59.439 } 00:21:59.439 }, 00:21:59.439 "base_bdevs_list": [ 00:21:59.439 { 00:21:59.439 "name": "spare", 00:21:59.439 "uuid": "5174be6e-f508-553c-aff0-41f8a879820a", 00:21:59.439 "is_configured": true, 00:21:59.439 "data_offset": 0, 00:21:59.439 "data_size": 65536 00:21:59.439 }, 00:21:59.439 { 00:21:59.439 "name": null, 00:21:59.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.439 "is_configured": false, 00:21:59.439 "data_offset": 0, 00:21:59.439 "data_size": 65536 00:21:59.439 }, 00:21:59.439 { 00:21:59.439 "name": "BaseBdev3", 00:21:59.439 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:21:59.439 "is_configured": true, 00:21:59.439 "data_offset": 0, 00:21:59.439 "data_size": 65536 00:21:59.439 }, 00:21:59.439 { 00:21:59.439 "name": "BaseBdev4", 00:21:59.440 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:21:59.440 "is_configured": true, 00:21:59.440 "data_offset": 0, 00:21:59.440 "data_size": 65536 00:21:59.440 } 00:21:59.440 ] 00:21:59.440 }' 00:21:59.440 16:39:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:59.440 16:39:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.440 16:39:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:59.440 16:39:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.440 16:39:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:00.383 [2024-07-13 16:39:31.506125] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:00.383 [2024-07-13 
16:39:31.606087] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:00.383 [2024-07-13 16:39:31.610395] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.642 16:39:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:00.642 16:39:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.642 16:39:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:00.642 16:39:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:00.642 16:39:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:00.642 16:39:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:00.642 16:39:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.642 16:39:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:00.901 "name": "raid_bdev1", 00:22:00.901 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:22:00.901 "strip_size_kb": 0, 00:22:00.901 "state": "online", 00:22:00.901 "raid_level": "raid1", 00:22:00.901 "superblock": false, 00:22:00.901 "num_base_bdevs": 4, 00:22:00.901 "num_base_bdevs_discovered": 3, 00:22:00.901 "num_base_bdevs_operational": 3, 00:22:00.901 "base_bdevs_list": [ 00:22:00.901 { 00:22:00.901 "name": "spare", 00:22:00.901 "uuid": "5174be6e-f508-553c-aff0-41f8a879820a", 00:22:00.901 "is_configured": true, 00:22:00.901 "data_offset": 0, 00:22:00.901 "data_size": 65536 00:22:00.901 }, 00:22:00.901 { 00:22:00.901 "name": null, 00:22:00.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.901 "is_configured": false, 00:22:00.901 "data_offset": 0, 00:22:00.901 "data_size": 65536 00:22:00.901 }, 00:22:00.901 { 00:22:00.901 "name": "BaseBdev3", 00:22:00.901 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:22:00.901 "is_configured": true, 00:22:00.901 "data_offset": 0, 00:22:00.901 "data_size": 65536 00:22:00.901 }, 00:22:00.901 { 00:22:00.901 "name": "BaseBdev4", 00:22:00.901 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:22:00.901 "is_configured": true, 00:22:00.901 "data_offset": 0, 00:22:00.901 "data_size": 65536 00:22:00.901 } 00:22:00.901 ] 00:22:00.901 }' 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@660 -- # break 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.901 16:39:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:01.160 "name": "raid_bdev1", 00:22:01.160 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:22:01.160 
"strip_size_kb": 0, 00:22:01.160 "state": "online", 00:22:01.160 "raid_level": "raid1", 00:22:01.160 "superblock": false, 00:22:01.160 "num_base_bdevs": 4, 00:22:01.160 "num_base_bdevs_discovered": 3, 00:22:01.160 "num_base_bdevs_operational": 3, 00:22:01.160 "base_bdevs_list": [ 00:22:01.160 { 00:22:01.160 "name": "spare", 00:22:01.160 "uuid": "5174be6e-f508-553c-aff0-41f8a879820a", 00:22:01.160 "is_configured": true, 00:22:01.160 "data_offset": 0, 00:22:01.160 "data_size": 65536 00:22:01.160 }, 00:22:01.160 { 00:22:01.160 "name": null, 00:22:01.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.160 "is_configured": false, 00:22:01.160 "data_offset": 0, 00:22:01.160 "data_size": 65536 00:22:01.160 }, 00:22:01.160 { 00:22:01.160 "name": "BaseBdev3", 00:22:01.160 "uuid": "2c3a5163-dde8-49cc-a258-08a455797ce9", 00:22:01.160 "is_configured": true, 00:22:01.160 "data_offset": 0, 00:22:01.160 "data_size": 65536 00:22:01.160 }, 00:22:01.160 { 00:22:01.160 "name": "BaseBdev4", 00:22:01.160 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:22:01.160 "is_configured": true, 00:22:01.160 "data_offset": 0, 00:22:01.160 "data_size": 65536 00:22:01.160 } 00:22:01.160 ] 00:22:01.160 }' 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.160 16:39:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.419 16:39:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.419 "name": "raid_bdev1", 00:22:01.419 "uuid": "d403ada3-5911-49b3-b4ec-8dea7c257cca", 00:22:01.419 "strip_size_kb": 0, 00:22:01.419 "state": "online", 00:22:01.419 "raid_level": "raid1", 00:22:01.419 "superblock": false, 00:22:01.419 "num_base_bdevs": 4, 00:22:01.419 "num_base_bdevs_discovered": 3, 00:22:01.419 "num_base_bdevs_operational": 3, 00:22:01.419 "base_bdevs_list": [ 00:22:01.419 { 00:22:01.419 "name": "spare", 00:22:01.419 "uuid": "5174be6e-f508-553c-aff0-41f8a879820a", 00:22:01.419 "is_configured": true, 00:22:01.419 "data_offset": 0, 00:22:01.419 "data_size": 65536 00:22:01.419 }, 00:22:01.419 { 00:22:01.419 "name": null, 00:22:01.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.419 "is_configured": false, 00:22:01.419 "data_offset": 0, 00:22:01.419 "data_size": 65536 00:22:01.419 }, 00:22:01.419 { 00:22:01.419 "name": "BaseBdev3", 00:22:01.419 "uuid": 
"2c3a5163-dde8-49cc-a258-08a455797ce9", 00:22:01.419 "is_configured": true, 00:22:01.419 "data_offset": 0, 00:22:01.419 "data_size": 65536 00:22:01.419 }, 00:22:01.419 { 00:22:01.419 "name": "BaseBdev4", 00:22:01.419 "uuid": "e7fa85da-293b-44b7-86c0-90c9622cb089", 00:22:01.419 "is_configured": true, 00:22:01.419 "data_offset": 0, 00:22:01.419 "data_size": 65536 00:22:01.419 } 00:22:01.419 ] 00:22:01.419 }' 00:22:01.419 16:39:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.419 16:39:32 -- common/autotest_common.sh@10 -- # set +x 00:22:01.984 16:39:33 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:02.258 [2024-07-13 16:39:33.666651] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:02.258 [2024-07-13 16:39:33.666717] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.258 00:22:02.258 Latency(us) 00:22:02.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.258 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:02.258 raid_bdev1 : 11.41 92.26 276.77 0.00 0.00 15312.20 431.06 120835.90 00:22:02.258 =================================================================================================================== 00:22:02.258 Total : 92.26 276.77 0.00 0.00 15312.20 431.06 120835.90 00:22:02.258 [2024-07-13 16:39:33.708315] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.258 [2024-07-13 16:39:33.708392] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.258 [2024-07-13 16:39:33.708520] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.258 [2024-07-13 16:39:33.708533] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:22:02.258 0 00:22:02.516 16:39:33 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.516 16:39:33 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:02.516 16:39:33 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:02.516 16:39:33 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:02.516 16:39:33 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:02.516 16:39:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:02.516 16:39:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:02.516 16:39:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:02.516 16:39:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:02.516 16:39:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:02.516 16:39:33 -- bdev/nbd_common.sh@12 -- # local i 00:22:02.516 16:39:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:02.516 16:39:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:02.516 16:39:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:02.774 /dev/nbd0 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:03.033 16:39:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:03.033 16:39:34 -- common/autotest_common.sh@857 -- # local i 00:22:03.033 16:39:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 
00:22:03.033 16:39:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:03.033 16:39:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:03.033 16:39:34 -- common/autotest_common.sh@861 -- # break 00:22:03.033 16:39:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:03.033 16:39:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:03.033 16:39:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:03.033 1+0 records in 00:22:03.033 1+0 records out 00:22:03.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467891 s, 8.8 MB/s 00:22:03.033 16:39:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.033 16:39:34 -- common/autotest_common.sh@874 -- # size=4096 00:22:03.033 16:39:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.033 16:39:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:03.033 16:39:34 -- common/autotest_common.sh@877 -- # return 0 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:03.033 16:39:34 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:03.033 16:39:34 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:03.033 16:39:34 -- bdev/bdev_raid.sh@678 -- # continue 00:22:03.033 16:39:34 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:03.033 16:39:34 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:03.033 16:39:34 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@12 -- # local i 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:03.033 16:39:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:03.292 /dev/nbd1 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:03.292 16:39:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:03.292 16:39:34 -- common/autotest_common.sh@857 -- # local i 00:22:03.292 16:39:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:03.292 16:39:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:03.292 16:39:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:03.292 16:39:34 -- common/autotest_common.sh@861 -- # break 00:22:03.292 16:39:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:03.292 16:39:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:03.292 16:39:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:03.292 1+0 records in 00:22:03.292 1+0 records out 00:22:03.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786705 s, 5.2 MB/s 00:22:03.292 16:39:34 -- common/autotest_common.sh@874 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.292 16:39:34 -- common/autotest_common.sh@874 -- # size=4096 00:22:03.292 16:39:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.292 16:39:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:03.292 16:39:34 -- common/autotest_common.sh@877 -- # return 0 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:03.292 16:39:34 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:03.292 16:39:34 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@51 -- # local i 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:03.292 16:39:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:03.552 16:39:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@41 -- # break 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@45 -- # return 0 00:22:03.811 16:39:35 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:03.811 16:39:35 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:03.811 16:39:35 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@12 -- # local i 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:03.811 /dev/nbd1 00:22:03.811 16:39:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:04.070 16:39:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:04.071 16:39:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:04.071 16:39:35 -- common/autotest_common.sh@857 -- # local i 00:22:04.071 16:39:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:04.071 16:39:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:04.071 16:39:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:04.071 16:39:35 -- common/autotest_common.sh@861 -- # break 00:22:04.071 16:39:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:04.071 16:39:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:04.071 16:39:35 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.071 1+0 records in 00:22:04.071 1+0 records out 00:22:04.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797716 s, 5.1 MB/s 00:22:04.071 16:39:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.071 16:39:35 -- common/autotest_common.sh@874 -- # size=4096 00:22:04.071 16:39:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.071 16:39:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:04.071 16:39:35 -- common/autotest_common.sh@877 -- # return 0 00:22:04.071 16:39:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.071 16:39:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:04.071 16:39:35 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:04.071 16:39:35 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:04.071 16:39:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:04.071 16:39:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:04.071 16:39:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:04.071 16:39:35 -- bdev/nbd_common.sh@51 -- # local i 00:22:04.071 16:39:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:04.071 16:39:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@41 -- # break 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@45 -- # return 0 00:22:04.331 16:39:35 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@51 -- # local i 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:04.331 16:39:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:04.590 16:39:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:04.590 16:39:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:04.590 16:39:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:04.590 16:39:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:04.590 16:39:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:04.590 16:39:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:04.590 16:39:35 -- bdev/nbd_common.sh@41 -- # break 00:22:04.590 16:39:35 -- bdev/nbd_common.sh@45 -- # return 0 00:22:04.590 16:39:35 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:04.590 16:39:35 -- bdev/bdev_raid.sh@709 -- # killprocess 136371 00:22:04.590 16:39:35 -- common/autotest_common.sh@926 -- # '[' -z 136371 ']' 00:22:04.590 16:39:35 -- common/autotest_common.sh@930 -- # kill -0 
136371 00:22:04.590 16:39:35 -- common/autotest_common.sh@931 -- # uname 00:22:04.590 16:39:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:04.590 16:39:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136371 00:22:04.590 16:39:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:04.590 16:39:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:04.590 16:39:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136371' 00:22:04.590 killing process with pid 136371 00:22:04.590 16:39:35 -- common/autotest_common.sh@945 -- # kill 136371 00:22:04.590 Received shutdown signal, test time was about 13.630152 seconds 00:22:04.590 00:22:04.590 Latency(us) 00:22:04.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.590 =================================================================================================================== 00:22:04.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:04.590 16:39:35 -- common/autotest_common.sh@950 -- # wait 136371 00:22:04.590 [2024-07-13 16:39:35.918879] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:04.590 [2024-07-13 16:39:36.009031] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:05.158 00:22:05.158 real 0m18.427s 00:22:05.158 user 0m28.358s 00:22:05.158 sys 0m3.342s 00:22:05.158 16:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.158 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:22:05.158 ************************************ 00:22:05.158 END TEST raid_rebuild_test_io 00:22:05.158 ************************************ 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:05.158 16:39:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:05.158 16:39:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:05.158 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:22:05.158 ************************************ 00:22:05.158 START TEST raid_rebuild_test_sb_io 00:22:05.158 ************************************ 00:22:05.158 16:39:36 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:05.158 16:39:36 -- 
bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@544 -- # raid_pid=136876 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136876 /var/tmp/spdk-raid.sock 00:22:05.158 16:39:36 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:05.158 16:39:36 -- common/autotest_common.sh@819 -- # '[' -z 136876 ']' 00:22:05.158 16:39:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:05.158 16:39:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:05.158 16:39:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:05.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:05.158 16:39:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:05.158 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:22:05.158 [2024-07-13 16:39:36.610640] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:05.158 [2024-07-13 16:39:36.611218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136876 ] 00:22:05.158 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:05.158 Zero copy mechanism will not be used. 
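Editor's note — a minimal sketch of how this background-I/O harness is driven, using only the commands that appear in the trace: with -z, bdevperf starts idle and waits for an RPC, so the test can build the array over the same socket first and then release the queued 60-second randrw run explicitly.

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # helper from autotest_common.sh
    # ... create the base bdevs and raid_bdev1 over /var/tmp/spdk-raid.sock ...
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests &      # starts the queued I/O run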
00:22:05.418 [2024-07-13 16:39:36.767131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.418 [2024-07-13 16:39:36.857564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.677 [2024-07-13 16:39:36.942745] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:06.242 16:39:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:06.242 16:39:37 -- common/autotest_common.sh@852 -- # return 0 00:22:06.242 16:39:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:06.242 16:39:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:06.242 16:39:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:06.242 BaseBdev1_malloc 00:22:06.242 16:39:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:06.499 [2024-07-13 16:39:37.872205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:06.499 [2024-07-13 16:39:37.872693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.499 [2024-07-13 16:39:37.872784] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:22:06.499 [2024-07-13 16:39:37.873077] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.499 [2024-07-13 16:39:37.876266] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.499 [2024-07-13 16:39:37.876549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:06.499 BaseBdev1 00:22:06.499 16:39:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:06.499 16:39:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:06.499 16:39:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:06.757 BaseBdev2_malloc 00:22:06.757 16:39:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:07.016 [2024-07-13 16:39:38.365744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:07.016 [2024-07-13 16:39:38.366189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.016 [2024-07-13 16:39:38.366277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:07.016 [2024-07-13 16:39:38.366410] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.016 [2024-07-13 16:39:38.369403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.016 [2024-07-13 16:39:38.369621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:07.016 BaseBdev2 00:22:07.016 16:39:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:07.016 16:39:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:07.016 16:39:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:07.274 BaseBdev3_malloc 00:22:07.274 16:39:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:22:07.532 [2024-07-13 16:39:38.880618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:07.532 [2024-07-13 16:39:38.881063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.532 [2024-07-13 16:39:38.881157] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:07.532 [2024-07-13 16:39:38.881326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.532 [2024-07-13 16:39:38.884436] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.532 [2024-07-13 16:39:38.884685] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:07.532 BaseBdev3 00:22:07.532 16:39:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:07.532 16:39:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:07.532 16:39:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:07.791 BaseBdev4_malloc 00:22:07.791 16:39:39 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:08.050 [2024-07-13 16:39:39.333854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:08.050 [2024-07-13 16:39:39.334282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.050 [2024-07-13 16:39:39.334467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:08.050 [2024-07-13 16:39:39.334612] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.050 [2024-07-13 16:39:39.337611] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.050 [2024-07-13 16:39:39.337833] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:08.050 BaseBdev4 00:22:08.050 16:39:39 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:08.308 spare_malloc 00:22:08.308 16:39:39 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:08.308 spare_delay 00:22:08.568 16:39:39 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:08.568 [2024-07-13 16:39:39.979095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:08.568 [2024-07-13 16:39:39.979485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.568 [2024-07-13 16:39:39.979566] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:08.568 [2024-07-13 16:39:39.979698] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.568 [2024-07-13 16:39:39.982851] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.568 [2024-07-13 16:39:39.983051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:08.568 spare 00:22:08.568 16:39:40 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:08.827 [2024-07-13 16:39:40.223519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.827 [2024-07-13 16:39:40.226519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:08.827 [2024-07-13 16:39:40.226807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:08.827 [2024-07-13 16:39:40.226894] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:08.827 [2024-07-13 16:39:40.227234] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:08.827 [2024-07-13 16:39:40.227353] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:08.827 [2024-07-13 16:39:40.227589] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:08.827 [2024-07-13 16:39:40.228168] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:08.827 [2024-07-13 16:39:40.228316] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:08.827 [2024-07-13 16:39:40.228676] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.827 16:39:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.085 16:39:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:09.085 "name": "raid_bdev1", 00:22:09.085 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:09.085 "strip_size_kb": 0, 00:22:09.085 "state": "online", 00:22:09.085 "raid_level": "raid1", 00:22:09.085 "superblock": true, 00:22:09.085 "num_base_bdevs": 4, 00:22:09.085 "num_base_bdevs_discovered": 4, 00:22:09.085 "num_base_bdevs_operational": 4, 00:22:09.085 "base_bdevs_list": [ 00:22:09.085 { 00:22:09.085 "name": "BaseBdev1", 00:22:09.085 "uuid": "390dafb9-aec2-577f-bb01-e0e75944878e", 00:22:09.085 "is_configured": true, 00:22:09.085 "data_offset": 2048, 00:22:09.085 "data_size": 63488 00:22:09.085 }, 00:22:09.085 { 00:22:09.085 "name": "BaseBdev2", 00:22:09.085 "uuid": "10021d32-3225-5f86-abff-5c4bf781e4b3", 00:22:09.085 "is_configured": true, 00:22:09.085 "data_offset": 2048, 00:22:09.085 "data_size": 63488 00:22:09.085 }, 00:22:09.085 { 00:22:09.085 "name": "BaseBdev3", 00:22:09.085 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:09.085 "is_configured": true, 00:22:09.085 "data_offset": 2048, 00:22:09.085 "data_size": 63488 00:22:09.085 }, 00:22:09.085 
{ 00:22:09.085 "name": "BaseBdev4", 00:22:09.085 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:09.085 "is_configured": true, 00:22:09.085 "data_offset": 2048, 00:22:09.085 "data_size": 63488 00:22:09.085 } 00:22:09.085 ] 00:22:09.085 }' 00:22:09.085 16:39:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:09.085 16:39:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.021 16:39:41 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:10.021 16:39:41 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:10.021 [2024-07-13 16:39:41.381106] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:10.021 16:39:41 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:10.021 16:39:41 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.021 16:39:41 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:10.279 16:39:41 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:10.279 16:39:41 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:10.279 16:39:41 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:10.279 16:39:41 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:10.538 [2024-07-13 16:39:41.768859] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:22:10.538 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:10.538 Zero copy mechanism will not be used. 00:22:10.538 Running I/O for 60 seconds... 
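Editor's note — a small sketch of the superblock-offset check performed just above. Each malloc base bdev is created as 32 MiB of 512-byte blocks (65536 blocks); with bdev_raid_create -s the first 2048 blocks of each base bdev hold the on-disk superblock, so the usable size drops to 63488 blocks at data_offset 2048 (versus data_offset 0 / data_size 65536 in the earlier non-superblock test). The rpc_py wrapper below stands in for the full scripts/rpc.py invocation:

    rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    raid_bdev_size=$(rpc_py bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')  # 63488
    data_offset=$(rpc_py bdev_raid_get_bdevs all |
        jq -r '.[].base_bdevs_list[0].data_offset')                                 # 2048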
00:22:10.539 [2024-07-13 16:39:41.895114] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.539 [2024-07-13 16:39:41.895720] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.539 16:39:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.798 16:39:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.798 "name": "raid_bdev1", 00:22:10.798 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:10.798 "strip_size_kb": 0, 00:22:10.798 "state": "online", 00:22:10.798 "raid_level": "raid1", 00:22:10.798 "superblock": true, 00:22:10.798 "num_base_bdevs": 4, 00:22:10.798 "num_base_bdevs_discovered": 3, 00:22:10.798 "num_base_bdevs_operational": 3, 00:22:10.798 "base_bdevs_list": [ 00:22:10.798 { 00:22:10.798 "name": null, 00:22:10.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.798 "is_configured": false, 00:22:10.798 "data_offset": 2048, 00:22:10.798 "data_size": 63488 00:22:10.798 }, 00:22:10.798 { 00:22:10.798 "name": "BaseBdev2", 00:22:10.798 "uuid": "10021d32-3225-5f86-abff-5c4bf781e4b3", 00:22:10.798 "is_configured": true, 00:22:10.798 "data_offset": 2048, 00:22:10.798 "data_size": 63488 00:22:10.798 }, 00:22:10.798 { 00:22:10.798 "name": "BaseBdev3", 00:22:10.798 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:10.798 "is_configured": true, 00:22:10.798 "data_offset": 2048, 00:22:10.798 "data_size": 63488 00:22:10.798 }, 00:22:10.798 { 00:22:10.798 "name": "BaseBdev4", 00:22:10.798 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:10.798 "is_configured": true, 00:22:10.798 "data_offset": 2048, 00:22:10.798 "data_size": 63488 00:22:10.798 } 00:22:10.798 ] 00:22:10.798 }' 00:22:10.798 16:39:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.798 16:39:42 -- common/autotest_common.sh@10 -- # set +x 00:22:11.381 16:39:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:11.659 [2024-07-13 16:39:43.047011] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:11.659 [2024-07-13 16:39:43.047384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:11.659 16:39:43 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:11.659 [2024-07-13 16:39:43.114056] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:22:11.659 [2024-07-13 16:39:43.117433] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:11.924 
[2024-07-13 16:39:43.239604] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:11.924 [2024-07-13 16:39:43.240733] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:11.924 [2024-07-13 16:39:43.365188] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:11.924 [2024-07-13 16:39:43.365935] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:12.491 [2024-07-13 16:39:43.730649] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:12.749 [2024-07-13 16:39:43.969792] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:12.749 [2024-07-13 16:39:43.970455] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:12.749 16:39:44 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.749 16:39:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:12.749 16:39:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:12.749 16:39:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:12.749 16:39:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:12.749 16:39:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.749 16:39:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.006 [2024-07-13 16:39:44.238670] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:13.006 16:39:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.006 "name": "raid_bdev1", 00:22:13.006 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:13.006 "strip_size_kb": 0, 00:22:13.006 "state": "online", 00:22:13.006 "raid_level": "raid1", 00:22:13.006 "superblock": true, 00:22:13.006 "num_base_bdevs": 4, 00:22:13.006 "num_base_bdevs_discovered": 4, 00:22:13.006 "num_base_bdevs_operational": 4, 00:22:13.006 "process": { 00:22:13.006 "type": "rebuild", 00:22:13.006 "target": "spare", 00:22:13.006 "progress": { 00:22:13.006 "blocks": 14336, 00:22:13.006 "percent": 22 00:22:13.006 } 00:22:13.006 }, 00:22:13.006 "base_bdevs_list": [ 00:22:13.006 { 00:22:13.006 "name": "spare", 00:22:13.006 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:13.006 "is_configured": true, 00:22:13.006 "data_offset": 2048, 00:22:13.006 "data_size": 63488 00:22:13.006 }, 00:22:13.006 { 00:22:13.006 "name": "BaseBdev2", 00:22:13.006 "uuid": "10021d32-3225-5f86-abff-5c4bf781e4b3", 00:22:13.006 "is_configured": true, 00:22:13.006 "data_offset": 2048, 00:22:13.006 "data_size": 63488 00:22:13.006 }, 00:22:13.006 { 00:22:13.006 "name": "BaseBdev3", 00:22:13.006 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:13.006 "is_configured": true, 00:22:13.006 "data_offset": 2048, 00:22:13.006 "data_size": 63488 00:22:13.006 }, 00:22:13.006 { 00:22:13.006 "name": "BaseBdev4", 00:22:13.006 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:13.006 "is_configured": true, 00:22:13.006 "data_offset": 2048, 00:22:13.006 "data_size": 63488 00:22:13.006 } 00:22:13.006 ] 00:22:13.006 }' 00:22:13.006 16:39:44 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:13.006 16:39:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.006 16:39:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:13.007 16:39:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.007 16:39:44 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:13.264 [2024-07-13 16:39:44.695142] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:13.264 [2024-07-13 16:39:44.717486] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:13.523 [2024-07-13 16:39:44.834164] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:13.523 [2024-07-13 16:39:44.856656] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.523 [2024-07-13 16:39:44.888403] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.523 16:39:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.781 16:39:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:13.781 "name": "raid_bdev1", 00:22:13.781 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:13.781 "strip_size_kb": 0, 00:22:13.781 "state": "online", 00:22:13.781 "raid_level": "raid1", 00:22:13.781 "superblock": true, 00:22:13.781 "num_base_bdevs": 4, 00:22:13.781 "num_base_bdevs_discovered": 3, 00:22:13.781 "num_base_bdevs_operational": 3, 00:22:13.781 "base_bdevs_list": [ 00:22:13.781 { 00:22:13.781 "name": null, 00:22:13.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.781 "is_configured": false, 00:22:13.781 "data_offset": 2048, 00:22:13.781 "data_size": 63488 00:22:13.781 }, 00:22:13.781 { 00:22:13.781 "name": "BaseBdev2", 00:22:13.781 "uuid": "10021d32-3225-5f86-abff-5c4bf781e4b3", 00:22:13.781 "is_configured": true, 00:22:13.781 "data_offset": 2048, 00:22:13.781 "data_size": 63488 00:22:13.781 }, 00:22:13.781 { 00:22:13.781 "name": "BaseBdev3", 00:22:13.781 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:13.781 "is_configured": true, 00:22:13.781 "data_offset": 2048, 00:22:13.781 "data_size": 63488 00:22:13.781 }, 00:22:13.781 { 00:22:13.781 "name": "BaseBdev4", 00:22:13.781 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:13.781 "is_configured": true, 00:22:13.781 "data_offset": 2048, 00:22:13.781 "data_size": 63488 00:22:13.781 } 00:22:13.781 ] 
00:22:13.781 }' 00:22:13.781 16:39:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:13.781 16:39:45 -- common/autotest_common.sh@10 -- # set +x 00:22:14.716 16:39:45 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.716 16:39:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:14.716 16:39:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:14.716 16:39:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:14.716 16:39:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.716 16:39:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.716 16:39:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.716 16:39:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.716 "name": "raid_bdev1", 00:22:14.716 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:14.716 "strip_size_kb": 0, 00:22:14.716 "state": "online", 00:22:14.716 "raid_level": "raid1", 00:22:14.716 "superblock": true, 00:22:14.716 "num_base_bdevs": 4, 00:22:14.716 "num_base_bdevs_discovered": 3, 00:22:14.717 "num_base_bdevs_operational": 3, 00:22:14.717 "base_bdevs_list": [ 00:22:14.717 { 00:22:14.717 "name": null, 00:22:14.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.717 "is_configured": false, 00:22:14.717 "data_offset": 2048, 00:22:14.717 "data_size": 63488 00:22:14.717 }, 00:22:14.717 { 00:22:14.717 "name": "BaseBdev2", 00:22:14.717 "uuid": "10021d32-3225-5f86-abff-5c4bf781e4b3", 00:22:14.717 "is_configured": true, 00:22:14.717 "data_offset": 2048, 00:22:14.717 "data_size": 63488 00:22:14.717 }, 00:22:14.717 { 00:22:14.717 "name": "BaseBdev3", 00:22:14.717 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:14.717 "is_configured": true, 00:22:14.717 "data_offset": 2048, 00:22:14.717 "data_size": 63488 00:22:14.717 }, 00:22:14.717 { 00:22:14.717 "name": "BaseBdev4", 00:22:14.717 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:14.717 "is_configured": true, 00:22:14.717 "data_offset": 2048, 00:22:14.717 "data_size": 63488 00:22:14.717 } 00:22:14.717 ] 00:22:14.717 }' 00:22:14.717 16:39:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.975 16:39:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:14.975 16:39:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.975 16:39:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:14.975 16:39:46 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:15.234 [2024-07-13 16:39:46.489876] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:15.234 [2024-07-13 16:39:46.490292] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:15.234 16:39:46 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:15.234 [2024-07-13 16:39:46.546475] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:22:15.234 [2024-07-13 16:39:46.549635] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:15.234 [2024-07-13 16:39:46.670622] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:15.234 [2024-07-13 16:39:46.671541] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 
6144 00:22:15.493 [2024-07-13 16:39:46.875537] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:15.493 [2024-07-13 16:39:46.876823] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:15.751 [2024-07-13 16:39:47.219770] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:15.751 [2024-07-13 16:39:47.220770] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:16.010 [2024-07-13 16:39:47.354100] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:16.010 [2024-07-13 16:39:47.355358] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:16.268 16:39:47 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.268 16:39:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:16.268 16:39:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:16.268 16:39:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:16.268 16:39:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:16.268 16:39:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.268 16:39:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.268 [2024-07-13 16:39:47.687517] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:16.527 "name": "raid_bdev1", 00:22:16.527 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:16.527 "strip_size_kb": 0, 00:22:16.527 "state": "online", 00:22:16.527 "raid_level": "raid1", 00:22:16.527 "superblock": true, 00:22:16.527 "num_base_bdevs": 4, 00:22:16.527 "num_base_bdevs_discovered": 4, 00:22:16.527 "num_base_bdevs_operational": 4, 00:22:16.527 "process": { 00:22:16.527 "type": "rebuild", 00:22:16.527 "target": "spare", 00:22:16.527 "progress": { 00:22:16.527 "blocks": 14336, 00:22:16.527 "percent": 22 00:22:16.527 } 00:22:16.527 }, 00:22:16.527 "base_bdevs_list": [ 00:22:16.527 { 00:22:16.527 "name": "spare", 00:22:16.527 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:16.527 "is_configured": true, 00:22:16.527 "data_offset": 2048, 00:22:16.527 "data_size": 63488 00:22:16.527 }, 00:22:16.527 { 00:22:16.527 "name": "BaseBdev2", 00:22:16.527 "uuid": "10021d32-3225-5f86-abff-5c4bf781e4b3", 00:22:16.527 "is_configured": true, 00:22:16.527 "data_offset": 2048, 00:22:16.527 "data_size": 63488 00:22:16.527 }, 00:22:16.527 { 00:22:16.527 "name": "BaseBdev3", 00:22:16.527 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:16.527 "is_configured": true, 00:22:16.527 "data_offset": 2048, 00:22:16.527 "data_size": 63488 00:22:16.527 }, 00:22:16.527 { 00:22:16.527 "name": "BaseBdev4", 00:22:16.527 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:16.527 "is_configured": true, 00:22:16.527 "data_offset": 2048, 00:22:16.527 "data_size": 63488 00:22:16.527 } 00:22:16.527 ] 00:22:16.527 }' 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:16.527 [2024-07-13 16:39:47.918892] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:16.527 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:16.527 16:39:47 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:16.786 [2024-07-13 16:39:48.173100] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:17.044 [2024-07-13 16:39:48.323264] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000026d0 00:22:17.045 [2024-07-13 16:39:48.323671] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002940 00:22:17.045 [2024-07-13 16:39:48.459861] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:17.045 16:39:48 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:17.045 16:39:48 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:17.045 16:39:48 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:17.045 16:39:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:17.045 16:39:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:17.045 16:39:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:17.045 16:39:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:17.045 16:39:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.045 16:39:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.303 16:39:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:17.303 "name": "raid_bdev1", 00:22:17.303 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:17.303 "strip_size_kb": 0, 00:22:17.303 "state": "online", 00:22:17.303 "raid_level": "raid1", 00:22:17.303 "superblock": true, 00:22:17.303 "num_base_bdevs": 4, 00:22:17.303 "num_base_bdevs_discovered": 3, 00:22:17.303 "num_base_bdevs_operational": 3, 00:22:17.303 "process": { 00:22:17.303 "type": "rebuild", 00:22:17.303 "target": "spare", 00:22:17.303 "progress": { 00:22:17.303 "blocks": 24576, 00:22:17.303 "percent": 38 00:22:17.303 } 00:22:17.303 }, 00:22:17.303 "base_bdevs_list": [ 00:22:17.303 { 00:22:17.303 "name": "spare", 00:22:17.303 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:17.303 "is_configured": true, 00:22:17.303 "data_offset": 2048, 00:22:17.303 "data_size": 63488 00:22:17.303 }, 00:22:17.303 { 00:22:17.303 "name": null, 00:22:17.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.303 "is_configured": false, 00:22:17.303 "data_offset": 2048, 00:22:17.303 "data_size": 63488 00:22:17.303 }, 00:22:17.303 { 00:22:17.303 "name": "BaseBdev3", 00:22:17.303 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:17.303 
"is_configured": true, 00:22:17.303 "data_offset": 2048, 00:22:17.303 "data_size": 63488 00:22:17.303 }, 00:22:17.303 { 00:22:17.303 "name": "BaseBdev4", 00:22:17.303 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:17.303 "is_configured": true, 00:22:17.303 "data_offset": 2048, 00:22:17.303 "data_size": 63488 00:22:17.303 } 00:22:17.303 ] 00:22:17.303 }' 00:22:17.303 16:39:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:17.562 [2024-07-13 16:39:48.807623] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@657 -- # local timeout=524 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.562 16:39:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.562 [2024-07-13 16:39:49.029340] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:17.821 16:39:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:17.821 "name": "raid_bdev1", 00:22:17.821 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:17.821 "strip_size_kb": 0, 00:22:17.821 "state": "online", 00:22:17.821 "raid_level": "raid1", 00:22:17.821 "superblock": true, 00:22:17.821 "num_base_bdevs": 4, 00:22:17.821 "num_base_bdevs_discovered": 3, 00:22:17.821 "num_base_bdevs_operational": 3, 00:22:17.821 "process": { 00:22:17.821 "type": "rebuild", 00:22:17.821 "target": "spare", 00:22:17.821 "progress": { 00:22:17.821 "blocks": 28672, 00:22:17.821 "percent": 45 00:22:17.821 } 00:22:17.821 }, 00:22:17.821 "base_bdevs_list": [ 00:22:17.821 { 00:22:17.821 "name": "spare", 00:22:17.821 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:17.821 "is_configured": true, 00:22:17.821 "data_offset": 2048, 00:22:17.821 "data_size": 63488 00:22:17.821 }, 00:22:17.821 { 00:22:17.821 "name": null, 00:22:17.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.821 "is_configured": false, 00:22:17.821 "data_offset": 2048, 00:22:17.821 "data_size": 63488 00:22:17.821 }, 00:22:17.821 { 00:22:17.821 "name": "BaseBdev3", 00:22:17.821 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:17.821 "is_configured": true, 00:22:17.821 "data_offset": 2048, 00:22:17.821 "data_size": 63488 00:22:17.821 }, 00:22:17.821 { 00:22:17.821 "name": "BaseBdev4", 00:22:17.821 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:17.821 "is_configured": true, 00:22:17.821 "data_offset": 2048, 00:22:17.821 "data_size": 63488 00:22:17.821 } 00:22:17.821 ] 00:22:17.821 }' 00:22:17.821 16:39:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:17.821 16:39:49 -- bdev/bdev_raid.sh@190 -- # 
[[ rebuild == \r\e\b\u\i\l\d ]] 00:22:17.821 16:39:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:17.821 16:39:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.821 16:39:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:18.755 [2024-07-13 16:39:49.863800] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:18.755 16:39:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:18.755 16:39:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.755 16:39:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:18.755 16:39:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:18.755 16:39:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:18.755 16:39:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:18.755 16:39:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.755 16:39:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.013 16:39:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:19.013 "name": "raid_bdev1", 00:22:19.013 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:19.013 "strip_size_kb": 0, 00:22:19.013 "state": "online", 00:22:19.013 "raid_level": "raid1", 00:22:19.013 "superblock": true, 00:22:19.013 "num_base_bdevs": 4, 00:22:19.013 "num_base_bdevs_discovered": 3, 00:22:19.013 "num_base_bdevs_operational": 3, 00:22:19.013 "process": { 00:22:19.013 "type": "rebuild", 00:22:19.013 "target": "spare", 00:22:19.013 "progress": { 00:22:19.013 "blocks": 49152, 00:22:19.013 "percent": 77 00:22:19.013 } 00:22:19.013 }, 00:22:19.013 "base_bdevs_list": [ 00:22:19.013 { 00:22:19.013 "name": "spare", 00:22:19.013 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:19.013 "is_configured": true, 00:22:19.013 "data_offset": 2048, 00:22:19.013 "data_size": 63488 00:22:19.013 }, 00:22:19.013 { 00:22:19.013 "name": null, 00:22:19.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.013 "is_configured": false, 00:22:19.013 "data_offset": 2048, 00:22:19.013 "data_size": 63488 00:22:19.013 }, 00:22:19.013 { 00:22:19.013 "name": "BaseBdev3", 00:22:19.013 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:19.013 "is_configured": true, 00:22:19.013 "data_offset": 2048, 00:22:19.013 "data_size": 63488 00:22:19.013 }, 00:22:19.013 { 00:22:19.013 "name": "BaseBdev4", 00:22:19.013 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:19.013 "is_configured": true, 00:22:19.013 "data_offset": 2048, 00:22:19.013 "data_size": 63488 00:22:19.013 } 00:22:19.013 ] 00:22:19.013 }' 00:22:19.013 16:39:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:19.013 [2024-07-13 16:39:50.456579] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:19.013 [2024-07-13 16:39:50.458163] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:19.013 16:39:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.013 16:39:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:19.271 16:39:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.271 16:39:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:19.271 [2024-07-13 16:39:50.662004] bdev_raid.c: 723:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:19.837 [2024-07-13 16:39:51.241636] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:20.095 [2024-07-13 16:39:51.347778] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:20.095 [2024-07-13 16:39:51.351798] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.095 16:39:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:20.095 16:39:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.095 16:39:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:20.095 16:39:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:20.095 16:39:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:20.095 16:39:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:20.095 16:39:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.095 16:39:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.353 16:39:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:20.353 "name": "raid_bdev1", 00:22:20.353 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:20.353 "strip_size_kb": 0, 00:22:20.353 "state": "online", 00:22:20.353 "raid_level": "raid1", 00:22:20.353 "superblock": true, 00:22:20.353 "num_base_bdevs": 4, 00:22:20.353 "num_base_bdevs_discovered": 3, 00:22:20.353 "num_base_bdevs_operational": 3, 00:22:20.353 "base_bdevs_list": [ 00:22:20.353 { 00:22:20.353 "name": "spare", 00:22:20.353 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:20.353 "is_configured": true, 00:22:20.353 "data_offset": 2048, 00:22:20.353 "data_size": 63488 00:22:20.353 }, 00:22:20.353 { 00:22:20.353 "name": null, 00:22:20.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.353 "is_configured": false, 00:22:20.353 "data_offset": 2048, 00:22:20.353 "data_size": 63488 00:22:20.353 }, 00:22:20.353 { 00:22:20.353 "name": "BaseBdev3", 00:22:20.353 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:20.353 "is_configured": true, 00:22:20.353 "data_offset": 2048, 00:22:20.353 "data_size": 63488 00:22:20.353 }, 00:22:20.353 { 00:22:20.353 "name": "BaseBdev4", 00:22:20.353 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:20.353 "is_configured": true, 00:22:20.353 "data_offset": 2048, 00:22:20.353 "data_size": 63488 00:22:20.353 } 00:22:20.353 ] 00:22:20.353 }' 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@660 -- # break 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:20.640 16:39:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.640 16:39:51 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:20.899 "name": "raid_bdev1", 00:22:20.899 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:20.899 "strip_size_kb": 0, 00:22:20.899 "state": "online", 00:22:20.899 "raid_level": "raid1", 00:22:20.899 "superblock": true, 00:22:20.899 "num_base_bdevs": 4, 00:22:20.899 "num_base_bdevs_discovered": 3, 00:22:20.899 "num_base_bdevs_operational": 3, 00:22:20.899 "base_bdevs_list": [ 00:22:20.899 { 00:22:20.899 "name": "spare", 00:22:20.899 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:20.899 "is_configured": true, 00:22:20.899 "data_offset": 2048, 00:22:20.899 "data_size": 63488 00:22:20.899 }, 00:22:20.899 { 00:22:20.899 "name": null, 00:22:20.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.899 "is_configured": false, 00:22:20.899 "data_offset": 2048, 00:22:20.899 "data_size": 63488 00:22:20.899 }, 00:22:20.899 { 00:22:20.899 "name": "BaseBdev3", 00:22:20.899 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:20.899 "is_configured": true, 00:22:20.899 "data_offset": 2048, 00:22:20.899 "data_size": 63488 00:22:20.899 }, 00:22:20.899 { 00:22:20.899 "name": "BaseBdev4", 00:22:20.899 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:20.899 "is_configured": true, 00:22:20.899 "data_offset": 2048, 00:22:20.899 "data_size": 63488 00:22:20.899 } 00:22:20.899 ] 00:22:20.899 }' 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.899 16:39:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.158 16:39:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:21.158 "name": "raid_bdev1", 00:22:21.158 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:21.158 "strip_size_kb": 0, 00:22:21.158 "state": "online", 00:22:21.158 "raid_level": "raid1", 00:22:21.158 "superblock": true, 00:22:21.158 "num_base_bdevs": 4, 00:22:21.158 "num_base_bdevs_discovered": 3, 00:22:21.158 "num_base_bdevs_operational": 3, 00:22:21.158 "base_bdevs_list": [ 00:22:21.158 { 00:22:21.158 "name": "spare", 00:22:21.158 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:21.158 "is_configured": true, 00:22:21.158 "data_offset": 2048, 00:22:21.158 "data_size": 63488 00:22:21.158 }, 00:22:21.158 { 00:22:21.158 "name": null, 00:22:21.158 
"uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.158 "is_configured": false, 00:22:21.158 "data_offset": 2048, 00:22:21.158 "data_size": 63488 00:22:21.158 }, 00:22:21.158 { 00:22:21.158 "name": "BaseBdev3", 00:22:21.158 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:21.158 "is_configured": true, 00:22:21.158 "data_offset": 2048, 00:22:21.158 "data_size": 63488 00:22:21.158 }, 00:22:21.158 { 00:22:21.158 "name": "BaseBdev4", 00:22:21.158 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:21.158 "is_configured": true, 00:22:21.159 "data_offset": 2048, 00:22:21.159 "data_size": 63488 00:22:21.159 } 00:22:21.159 ] 00:22:21.159 }' 00:22:21.159 16:39:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:21.159 16:39:52 -- common/autotest_common.sh@10 -- # set +x 00:22:21.724 16:39:53 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:21.982 [2024-07-13 16:39:53.434967] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:21.982 [2024-07-13 16:39:53.435350] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.240 00:22:22.240 Latency(us) 00:22:22.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.240 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:22.240 raid_bdev1 : 11.76 97.17 291.52 0.00 0.00 14638.77 434.96 120835.90 00:22:22.240 =================================================================================================================== 00:22:22.240 Total : 97.17 291.52 0.00 0.00 14638.77 434.96 120835.90 00:22:22.240 [2024-07-13 16:39:53.540321] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.240 [2024-07-13 16:39:53.540563] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.240 [2024-07-13 16:39:53.540742] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.240 0 00:22:22.240 [2024-07-13 16:39:53.540848] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:22.240 16:39:53 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.240 16:39:53 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:22.499 16:39:53 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:22.499 16:39:53 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:22.499 16:39:53 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:22.499 16:39:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:22.499 16:39:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:22.499 16:39:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:22.499 16:39:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:22.499 16:39:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:22.499 16:39:53 -- bdev/nbd_common.sh@12 -- # local i 00:22:22.499 16:39:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:22.499 16:39:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:22.499 16:39:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:22.757 /dev/nbd0 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:22.757 16:39:54 -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:22.757 16:39:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:22.757 16:39:54 -- common/autotest_common.sh@857 -- # local i 00:22:22.757 16:39:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:22.757 16:39:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:22.757 16:39:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:22.757 16:39:54 -- common/autotest_common.sh@861 -- # break 00:22:22.757 16:39:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:22.757 16:39:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:22.757 16:39:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:22.757 1+0 records in 00:22:22.757 1+0 records out 00:22:22.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620305 s, 6.6 MB/s 00:22:22.757 16:39:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.757 16:39:54 -- common/autotest_common.sh@874 -- # size=4096 00:22:22.757 16:39:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.757 16:39:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:22.757 16:39:54 -- common/autotest_common.sh@877 -- # return 0 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:22.757 16:39:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:22.757 16:39:54 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:22.757 16:39:54 -- bdev/bdev_raid.sh@678 -- # continue 00:22:22.757 16:39:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:22.757 16:39:54 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:22.757 16:39:54 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@12 -- # local i 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:22.757 16:39:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:23.014 /dev/nbd1 00:22:23.014 16:39:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:23.014 16:39:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:23.014 16:39:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:23.014 16:39:54 -- common/autotest_common.sh@857 -- # local i 00:22:23.014 16:39:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:23.014 16:39:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:23.014 16:39:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:23.015 16:39:54 -- common/autotest_common.sh@861 -- # break 00:22:23.015 16:39:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:23.015 16:39:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:23.015 16:39:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.015 1+0 records in 00:22:23.015 1+0 records out 00:22:23.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532474 s, 7.7 MB/s 00:22:23.015 16:39:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.015 16:39:54 -- common/autotest_common.sh@874 -- # size=4096 00:22:23.015 16:39:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.015 16:39:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:23.015 16:39:54 -- common/autotest_common.sh@877 -- # return 0 00:22:23.015 16:39:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.015 16:39:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:23.015 16:39:54 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:23.272 16:39:54 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:23.272 16:39:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:23.272 16:39:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:23.272 16:39:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:23.272 16:39:54 -- bdev/nbd_common.sh@51 -- # local i 00:22:23.272 16:39:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:23.272 16:39:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@41 -- # break 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@45 -- # return 0 00:22:23.530 16:39:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:23.530 16:39:54 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:23.530 16:39:54 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@12 -- # local i 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:23.530 16:39:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:23.789 /dev/nbd1 00:22:23.789 16:39:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:23.789 16:39:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:23.789 16:39:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:23.789 16:39:55 -- common/autotest_common.sh@857 -- # local i 00:22:23.789 16:39:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:23.789 16:39:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:23.789 16:39:55 -- common/autotest_common.sh@860 -- # 
grep -q -w nbd1 /proc/partitions 00:22:23.789 16:39:55 -- common/autotest_common.sh@861 -- # break 00:22:23.789 16:39:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:23.789 16:39:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:23.789 16:39:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.789 1+0 records in 00:22:23.789 1+0 records out 00:22:23.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600325 s, 6.8 MB/s 00:22:23.789 16:39:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.789 16:39:55 -- common/autotest_common.sh@874 -- # size=4096 00:22:23.789 16:39:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.789 16:39:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:23.789 16:39:55 -- common/autotest_common.sh@877 -- # return 0 00:22:23.789 16:39:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.789 16:39:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:23.789 16:39:55 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:24.047 16:39:55 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@51 -- # local i 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.047 16:39:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:24.303 16:39:55 -- bdev/nbd_common.sh@41 -- # break 00:22:24.303 16:39:55 -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.303 16:39:55 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:24.303 16:39:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:24.303 16:39:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:24.303 16:39:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:24.303 16:39:55 -- bdev/nbd_common.sh@51 -- # local i 00:22:24.303 16:39:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.303 16:39:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:24.563 16:39:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:24.563 16:39:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:24.563 16:39:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:24.563 16:39:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.563 16:39:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.563 16:39:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:24.563 16:39:55 -- bdev/nbd_common.sh@41 -- # break 00:22:24.563 16:39:55 -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.563 
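Three details in the trace above are worth unpacking before the log moves on to re-creating the base bdevs.

First, the rebuild is tracked by polling. Below is a minimal reconstruction of the loop traced at bdev_raid.sh@657-662, using the same RPC socket, bdev name, and jq filters that appear in the trace. The rpc wrapper function and the 30-second margin are conveniences of this sketch only; the trace shows direct rpc.py calls and the already-computed value timeout=524:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    timeout=$((SECONDS + 30))   # assumed margin; the trace shows timeout=524 at runtime
    while (( SECONDS < timeout )); do
        info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # once the rebuild completes, .process disappears and the filter yields "none"
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]
        sleep 1
    done

Once the process fields read "none", the script breaks out (bdev_raid.sh@660) and re-verifies the array state: still online, raid1, with 3 of 4 base bdevs discovered and operational after BaseBdev2 was removed mid-rebuild.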
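Second, the "/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected" message earlier in the trace is an unquoted-expansion bug rather than a test failure; the run continues past it. The trace shows the test builtin being invoked as '[' = false ']', i.e. the left-hand variable expanded to nothing. A sketch of the failure mode and the usual fixes (the variable name here is hypothetical):

    flag=""
    [ $flag = false ]      # word-splits to [ = false ] -> "unary operator expected"
    [ "$flag" = false ]    # quoted: compares "" = false and simply returns false
    [[ $flag = false ]]    # [[ ]] does not word-split, so no quoting is needed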
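Third, the nbd block that ends just above is the data-integrity check for the completed rebuild. The test exports the rebuilt "spare" bdev on /dev/nbd0 and each surviving mirror member (BaseBdev3 and BaseBdev4; BaseBdev2 was removed) on /dev/nbd1, then byte-compares the pairs. Because this raid1 array was created with a superblock, each member holds metadata at its head ("data_offset": 2048 blocks of 512 B = 1 MiB), so cmp starts 1048576 bytes in. A condensed sketch of the loop traced at bdev_raid.sh@675-684, reusing the rpc wrapper from the first sketch, with all RPC calls taken verbatim from the trace:

    rpc nbd_start_disk spare /dev/nbd0        # export the rebuild target
    for bdev in BaseBdev3 BaseBdev4; do       # surviving base bdevs
        rpc nbd_start_disk "$bdev" /dev/nbd1
        cmp -i 1048576 /dev/nbd0 /dev/nbd1    # skip the 1 MiB superblock region
        rpc nbd_stop_disk /dev/nbd1
    done
    rpc nbd_stop_disk /dev/nbd0

cmp exits non-zero at the first differing byte, so a silent pass here means the spare's data region is byte-identical to every remaining member after the rebuild.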
16:39:55 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:24.563 16:39:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:24.563 16:39:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:24.563 16:39:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:24.822 16:39:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:24.822 [2024-07-13 16:39:56.290726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:24.822 [2024-07-13 16:39:56.291185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.822 [2024-07-13 16:39:56.291281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:24.822 [2024-07-13 16:39:56.291409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.080 [2024-07-13 16:39:56.294593] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.081 [2024-07-13 16:39:56.294852] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:25.081 [2024-07-13 16:39:56.295068] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:25.081 [2024-07-13 16:39:56.295275] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.081 BaseBdev1 00:22:25.081 16:39:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:25.081 16:39:56 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:25.081 16:39:56 -- bdev/bdev_raid.sh@696 -- # continue 00:22:25.081 16:39:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:25.081 16:39:56 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:25.081 16:39:56 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:25.081 16:39:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:25.338 [2024-07-13 16:39:56.727285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:25.339 [2024-07-13 16:39:56.727716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.339 [2024-07-13 16:39:56.727806] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:25.339 [2024-07-13 16:39:56.727919] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.339 [2024-07-13 16:39:56.728491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.339 [2024-07-13 16:39:56.728689] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:25.339 [2024-07-13 16:39:56.728885] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:25.339 [2024-07-13 16:39:56.729009] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:25.339 [2024-07-13 16:39:56.729090] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:25.339 [2024-07-13 16:39:56.729158] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600000ae80 name raid_bdev1, state configuring 00:22:25.339 [2024-07-13 16:39:56.729318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.339 BaseBdev3 00:22:25.339 16:39:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:25.339 16:39:56 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:25.339 16:39:56 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:25.597 16:39:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:25.855 [2024-07-13 16:39:57.127420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:25.855 [2024-07-13 16:39:57.127810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.855 [2024-07-13 16:39:57.127896] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:25.855 [2024-07-13 16:39:57.127998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.855 [2024-07-13 16:39:57.128589] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.855 [2024-07-13 16:39:57.128754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:25.855 [2024-07-13 16:39:57.128925] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:25.855 [2024-07-13 16:39:57.129028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:25.855 BaseBdev4 00:22:25.855 16:39:57 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:26.114 [2024-07-13 16:39:57.519530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:26.114 [2024-07-13 16:39:57.519881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.114 [2024-07-13 16:39:57.519955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:26.114 [2024-07-13 16:39:57.520090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.114 [2024-07-13 16:39:57.520653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.114 [2024-07-13 16:39:57.520813] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:26.114 [2024-07-13 16:39:57.520990] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:26.114 [2024-07-13 16:39:57.521105] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.114 spare 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:26.114 16:39:57 
-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.114 16:39:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.372 [2024-07-13 16:39:57.621313] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:22:26.372 [2024-07-13 16:39:57.621577] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:26.372 [2024-07-13 16:39:57.621816] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033bc0 00:22:26.372 [2024-07-13 16:39:57.622402] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:22:26.372 [2024-07-13 16:39:57.622507] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:22:26.372 [2024-07-13 16:39:57.622745] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.372 16:39:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.372 "name": "raid_bdev1", 00:22:26.372 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:26.372 "strip_size_kb": 0, 00:22:26.372 "state": "online", 00:22:26.372 "raid_level": "raid1", 00:22:26.372 "superblock": true, 00:22:26.372 "num_base_bdevs": 4, 00:22:26.372 "num_base_bdevs_discovered": 3, 00:22:26.372 "num_base_bdevs_operational": 3, 00:22:26.372 "base_bdevs_list": [ 00:22:26.372 { 00:22:26.372 "name": "spare", 00:22:26.372 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:26.372 "is_configured": true, 00:22:26.372 "data_offset": 2048, 00:22:26.372 "data_size": 63488 00:22:26.372 }, 00:22:26.372 { 00:22:26.372 "name": null, 00:22:26.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.372 "is_configured": false, 00:22:26.372 "data_offset": 2048, 00:22:26.372 "data_size": 63488 00:22:26.372 }, 00:22:26.372 { 00:22:26.372 "name": "BaseBdev3", 00:22:26.372 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:26.372 "is_configured": true, 00:22:26.372 "data_offset": 2048, 00:22:26.372 "data_size": 63488 00:22:26.372 }, 00:22:26.372 { 00:22:26.372 "name": "BaseBdev4", 00:22:26.372 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:26.372 "is_configured": true, 00:22:26.372 "data_offset": 2048, 00:22:26.372 "data_size": 63488 00:22:26.372 } 00:22:26.372 ] 00:22:26.372 }' 00:22:26.372 16:39:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.372 16:39:57 -- common/autotest_common.sh@10 -- # set +x 00:22:26.939 16:39:58 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.939 16:39:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.939 16:39:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:26.939 16:39:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:26.939 16:39:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.939 16:39:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.939 16:39:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.199 16:39:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.199 "name": 
"raid_bdev1", 00:22:27.199 "uuid": "5213f9e5-60b7-471c-aa94-c70525e881f4", 00:22:27.199 "strip_size_kb": 0, 00:22:27.199 "state": "online", 00:22:27.199 "raid_level": "raid1", 00:22:27.199 "superblock": true, 00:22:27.199 "num_base_bdevs": 4, 00:22:27.199 "num_base_bdevs_discovered": 3, 00:22:27.199 "num_base_bdevs_operational": 3, 00:22:27.199 "base_bdevs_list": [ 00:22:27.199 { 00:22:27.199 "name": "spare", 00:22:27.199 "uuid": "548e93f2-dadc-5dce-8cba-6a4715aafcb4", 00:22:27.199 "is_configured": true, 00:22:27.199 "data_offset": 2048, 00:22:27.199 "data_size": 63488 00:22:27.199 }, 00:22:27.199 { 00:22:27.199 "name": null, 00:22:27.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.199 "is_configured": false, 00:22:27.199 "data_offset": 2048, 00:22:27.199 "data_size": 63488 00:22:27.199 }, 00:22:27.199 { 00:22:27.199 "name": "BaseBdev3", 00:22:27.199 "uuid": "b9cd9ea1-7083-57ff-b5e4-df3ea3f0bf30", 00:22:27.199 "is_configured": true, 00:22:27.199 "data_offset": 2048, 00:22:27.199 "data_size": 63488 00:22:27.199 }, 00:22:27.199 { 00:22:27.199 "name": "BaseBdev4", 00:22:27.199 "uuid": "d62e9ce3-0de7-5ee1-b29d-d3b898072827", 00:22:27.199 "is_configured": true, 00:22:27.199 "data_offset": 2048, 00:22:27.199 "data_size": 63488 00:22:27.199 } 00:22:27.199 ] 00:22:27.199 }' 00:22:27.199 16:39:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.199 16:39:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:27.199 16:39:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:27.199 16:39:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:27.199 16:39:58 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:27.199 16:39:58 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.458 16:39:58 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.458 16:39:58 -- bdev/bdev_raid.sh@709 -- # killprocess 136876 00:22:27.458 16:39:58 -- common/autotest_common.sh@926 -- # '[' -z 136876 ']' 00:22:27.458 16:39:58 -- common/autotest_common.sh@930 -- # kill -0 136876 00:22:27.458 16:39:58 -- common/autotest_common.sh@931 -- # uname 00:22:27.458 16:39:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.458 16:39:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136876 00:22:27.458 16:39:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:27.458 16:39:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:27.458 16:39:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136876' 00:22:27.458 killing process with pid 136876 00:22:27.458 16:39:58 -- common/autotest_common.sh@945 -- # kill 136876 00:22:27.458 Received shutdown signal, test time was about 17.129883 seconds 00:22:27.458 00:22:27.458 Latency(us) 00:22:27.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.458 =================================================================================================================== 00:22:27.458 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.458 16:39:58 -- common/autotest_common.sh@950 -- # wait 136876 00:22:27.458 [2024-07-13 16:39:58.902373] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:27.458 [2024-07-13 16:39:58.902515] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:27.458 [2024-07-13 16:39:58.902637] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:27.458 [2024-07-13 16:39:58.902648] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:22:27.716 [2024-07-13 16:39:58.991180] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:27.975 16:39:59 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:27.975 00:22:27.975 real 0m22.892s 00:22:27.975 user 0m36.283s 00:22:27.975 sys 0m4.051s 00:22:27.975 16:39:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.975 16:39:59 -- common/autotest_common.sh@10 -- # set +x 00:22:27.976 ************************************ 00:22:27.976 END TEST raid_rebuild_test_sb_io 00:22:27.976 ************************************ 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:22:28.235 16:39:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:28.235 16:39:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:28.235 16:39:59 -- common/autotest_common.sh@10 -- # set +x 00:22:28.235 ************************************ 00:22:28.235 START TEST raid5f_state_function_test 00:22:28.235 ************************************ 00:22:28.235 16:39:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=137488 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@225 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:28.235 Process raid pid: 137488 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137488' 00:22:28.235 16:39:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137488 /var/tmp/spdk-raid.sock 00:22:28.235 16:39:59 -- common/autotest_common.sh@819 -- # '[' -z 137488 ']' 00:22:28.235 16:39:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:28.235 16:39:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.235 16:39:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:28.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:28.235 16:39:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.235 16:39:59 -- common/autotest_common.sh@10 -- # set +x 00:22:28.235 [2024-07-13 16:39:59.576906] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:28.235 [2024-07-13 16:39:59.577421] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.494 [2024-07-13 16:39:59.733674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.494 [2024-07-13 16:39:59.828030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.494 [2024-07-13 16:39:59.913917] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.060 16:40:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:29.060 16:40:00 -- common/autotest_common.sh@852 -- # return 0 00:22:29.060 16:40:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:29.319 [2024-07-13 16:40:00.704768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:29.319 [2024-07-13 16:40:00.705110] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:29.319 [2024-07-13 16:40:00.705198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:29.319 [2024-07-13 16:40:00.705264] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:29.319 [2024-07-13 16:40:00.705293] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:29.319 [2024-07-13 16:40:00.705368] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@124 -- # 
local num_base_bdevs_discovered 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.319 16:40:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.579 16:40:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:29.579 "name": "Existed_Raid", 00:22:29.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.579 "strip_size_kb": 64, 00:22:29.579 "state": "configuring", 00:22:29.579 "raid_level": "raid5f", 00:22:29.579 "superblock": false, 00:22:29.579 "num_base_bdevs": 3, 00:22:29.579 "num_base_bdevs_discovered": 0, 00:22:29.579 "num_base_bdevs_operational": 3, 00:22:29.579 "base_bdevs_list": [ 00:22:29.579 { 00:22:29.579 "name": "BaseBdev1", 00:22:29.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.579 "is_configured": false, 00:22:29.579 "data_offset": 0, 00:22:29.579 "data_size": 0 00:22:29.579 }, 00:22:29.579 { 00:22:29.579 "name": "BaseBdev2", 00:22:29.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.579 "is_configured": false, 00:22:29.579 "data_offset": 0, 00:22:29.579 "data_size": 0 00:22:29.579 }, 00:22:29.579 { 00:22:29.579 "name": "BaseBdev3", 00:22:29.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.579 "is_configured": false, 00:22:29.579 "data_offset": 0, 00:22:29.579 "data_size": 0 00:22:29.579 } 00:22:29.579 ] 00:22:29.579 }' 00:22:29.579 16:40:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:29.579 16:40:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.168 16:40:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:30.425 [2024-07-13 16:40:01.724852] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:30.426 [2024-07-13 16:40:01.725213] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:30.426 16:40:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:30.683 [2024-07-13 16:40:02.000936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:30.683 [2024-07-13 16:40:02.001332] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:30.683 [2024-07-13 16:40:02.001436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:30.683 [2024-07-13 16:40:02.001505] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:30.683 [2024-07-13 16:40:02.001546] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:30.683 [2024-07-13 16:40:02.001597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:30.683 16:40:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:30.939 [2024-07-13 16:40:02.221644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:30.939 BaseBdev1 00:22:30.939 16:40:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:30.939 16:40:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:30.939 16:40:02 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:30.939 16:40:02 -- common/autotest_common.sh@889 -- # local i 00:22:30.939 16:40:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:30.939 16:40:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:30.939 16:40:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:31.196 16:40:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:31.494 [ 00:22:31.495 { 00:22:31.495 "name": "BaseBdev1", 00:22:31.495 "aliases": [ 00:22:31.495 "0a476d1c-f13c-4ea6-813c-e83cf099be28" 00:22:31.495 ], 00:22:31.495 "product_name": "Malloc disk", 00:22:31.495 "block_size": 512, 00:22:31.495 "num_blocks": 65536, 00:22:31.495 "uuid": "0a476d1c-f13c-4ea6-813c-e83cf099be28", 00:22:31.495 "assigned_rate_limits": { 00:22:31.495 "rw_ios_per_sec": 0, 00:22:31.495 "rw_mbytes_per_sec": 0, 00:22:31.495 "r_mbytes_per_sec": 0, 00:22:31.495 "w_mbytes_per_sec": 0 00:22:31.495 }, 00:22:31.495 "claimed": true, 00:22:31.495 "claim_type": "exclusive_write", 00:22:31.495 "zoned": false, 00:22:31.495 "supported_io_types": { 00:22:31.495 "read": true, 00:22:31.495 "write": true, 00:22:31.495 "unmap": true, 00:22:31.495 "write_zeroes": true, 00:22:31.495 "flush": true, 00:22:31.495 "reset": true, 00:22:31.495 "compare": false, 00:22:31.495 "compare_and_write": false, 00:22:31.495 "abort": true, 00:22:31.495 "nvme_admin": false, 00:22:31.495 "nvme_io": false 00:22:31.495 }, 00:22:31.495 "memory_domains": [ 00:22:31.495 { 00:22:31.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.495 "dma_device_type": 2 00:22:31.495 } 00:22:31.495 ], 00:22:31.495 "driver_specific": {} 00:22:31.495 } 00:22:31.495 ] 00:22:31.495 16:40:02 -- common/autotest_common.sh@895 -- # return 0 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:31.495 "name": "Existed_Raid", 00:22:31.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.495 "strip_size_kb": 64, 00:22:31.495 "state": "configuring", 00:22:31.495 "raid_level": "raid5f", 00:22:31.495 "superblock": false, 00:22:31.495 "num_base_bdevs": 3, 00:22:31.495 "num_base_bdevs_discovered": 1, 00:22:31.495 "num_base_bdevs_operational": 3, 00:22:31.495 "base_bdevs_list": [ 00:22:31.495 { 00:22:31.495 "name": "BaseBdev1", 00:22:31.495 "uuid": "0a476d1c-f13c-4ea6-813c-e83cf099be28", 
00:22:31.495 "is_configured": true, 00:22:31.495 "data_offset": 0, 00:22:31.495 "data_size": 65536 00:22:31.495 }, 00:22:31.495 { 00:22:31.495 "name": "BaseBdev2", 00:22:31.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.495 "is_configured": false, 00:22:31.495 "data_offset": 0, 00:22:31.495 "data_size": 0 00:22:31.495 }, 00:22:31.495 { 00:22:31.495 "name": "BaseBdev3", 00:22:31.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.495 "is_configured": false, 00:22:31.495 "data_offset": 0, 00:22:31.495 "data_size": 0 00:22:31.495 } 00:22:31.495 ] 00:22:31.495 }' 00:22:31.495 16:40:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:31.495 16:40:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.060 16:40:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:32.318 [2024-07-13 16:40:03.665938] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:32.318 [2024-07-13 16:40:03.666225] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:32.318 16:40:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:32.318 16:40:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:32.576 [2024-07-13 16:40:03.926132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:32.576 [2024-07-13 16:40:03.928837] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:32.576 [2024-07-13 16:40:03.929033] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:32.576 [2024-07-13 16:40:03.929113] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:32.576 [2024-07-13 16:40:03.929170] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.576 16:40:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.834 16:40:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:32.834 "name": "Existed_Raid", 00:22:32.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.834 "strip_size_kb": 64, 00:22:32.834 "state": "configuring", 00:22:32.834 "raid_level": "raid5f", 
00:22:32.834 "superblock": false, 00:22:32.834 "num_base_bdevs": 3, 00:22:32.834 "num_base_bdevs_discovered": 1, 00:22:32.834 "num_base_bdevs_operational": 3, 00:22:32.834 "base_bdevs_list": [ 00:22:32.834 { 00:22:32.834 "name": "BaseBdev1", 00:22:32.834 "uuid": "0a476d1c-f13c-4ea6-813c-e83cf099be28", 00:22:32.834 "is_configured": true, 00:22:32.834 "data_offset": 0, 00:22:32.834 "data_size": 65536 00:22:32.834 }, 00:22:32.834 { 00:22:32.834 "name": "BaseBdev2", 00:22:32.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.834 "is_configured": false, 00:22:32.834 "data_offset": 0, 00:22:32.834 "data_size": 0 00:22:32.834 }, 00:22:32.834 { 00:22:32.834 "name": "BaseBdev3", 00:22:32.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.834 "is_configured": false, 00:22:32.834 "data_offset": 0, 00:22:32.834 "data_size": 0 00:22:32.834 } 00:22:32.834 ] 00:22:32.834 }' 00:22:32.834 16:40:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:32.834 16:40:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.401 16:40:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:33.660 [2024-07-13 16:40:04.909982] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:33.660 BaseBdev2 00:22:33.660 16:40:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:33.660 16:40:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:22:33.660 16:40:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:33.660 16:40:04 -- common/autotest_common.sh@889 -- # local i 00:22:33.660 16:40:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:33.660 16:40:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:33.660 16:40:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:33.918 16:40:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:34.177 [ 00:22:34.177 { 00:22:34.177 "name": "BaseBdev2", 00:22:34.177 "aliases": [ 00:22:34.177 "db87f769-8603-4597-8cf2-a846be3d41dc" 00:22:34.177 ], 00:22:34.177 "product_name": "Malloc disk", 00:22:34.177 "block_size": 512, 00:22:34.177 "num_blocks": 65536, 00:22:34.177 "uuid": "db87f769-8603-4597-8cf2-a846be3d41dc", 00:22:34.177 "assigned_rate_limits": { 00:22:34.177 "rw_ios_per_sec": 0, 00:22:34.177 "rw_mbytes_per_sec": 0, 00:22:34.177 "r_mbytes_per_sec": 0, 00:22:34.177 "w_mbytes_per_sec": 0 00:22:34.177 }, 00:22:34.177 "claimed": true, 00:22:34.177 "claim_type": "exclusive_write", 00:22:34.177 "zoned": false, 00:22:34.177 "supported_io_types": { 00:22:34.177 "read": true, 00:22:34.177 "write": true, 00:22:34.177 "unmap": true, 00:22:34.177 "write_zeroes": true, 00:22:34.177 "flush": true, 00:22:34.177 "reset": true, 00:22:34.177 "compare": false, 00:22:34.177 "compare_and_write": false, 00:22:34.177 "abort": true, 00:22:34.177 "nvme_admin": false, 00:22:34.177 "nvme_io": false 00:22:34.177 }, 00:22:34.177 "memory_domains": [ 00:22:34.177 { 00:22:34.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.177 "dma_device_type": 2 00:22:34.177 } 00:22:34.177 ], 00:22:34.177 "driver_specific": {} 00:22:34.177 } 00:22:34.177 ] 00:22:34.177 16:40:05 -- common/autotest_common.sh@895 -- # return 0 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@254 -- # (( 
i < num_base_bdevs )) 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.177 16:40:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.436 16:40:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.436 "name": "Existed_Raid", 00:22:34.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.436 "strip_size_kb": 64, 00:22:34.436 "state": "configuring", 00:22:34.436 "raid_level": "raid5f", 00:22:34.436 "superblock": false, 00:22:34.436 "num_base_bdevs": 3, 00:22:34.436 "num_base_bdevs_discovered": 2, 00:22:34.436 "num_base_bdevs_operational": 3, 00:22:34.436 "base_bdevs_list": [ 00:22:34.436 { 00:22:34.436 "name": "BaseBdev1", 00:22:34.436 "uuid": "0a476d1c-f13c-4ea6-813c-e83cf099be28", 00:22:34.436 "is_configured": true, 00:22:34.436 "data_offset": 0, 00:22:34.436 "data_size": 65536 00:22:34.436 }, 00:22:34.436 { 00:22:34.436 "name": "BaseBdev2", 00:22:34.436 "uuid": "db87f769-8603-4597-8cf2-a846be3d41dc", 00:22:34.436 "is_configured": true, 00:22:34.436 "data_offset": 0, 00:22:34.436 "data_size": 65536 00:22:34.436 }, 00:22:34.436 { 00:22:34.436 "name": "BaseBdev3", 00:22:34.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.436 "is_configured": false, 00:22:34.436 "data_offset": 0, 00:22:34.436 "data_size": 0 00:22:34.436 } 00:22:34.436 ] 00:22:34.436 }' 00:22:34.436 16:40:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.436 16:40:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.003 16:40:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:35.262 [2024-07-13 16:40:06.479906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:35.262 [2024-07-13 16:40:06.480321] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:22:35.262 [2024-07-13 16:40:06.480370] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:35.262 [2024-07-13 16:40:06.480613] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:22:35.262 [2024-07-13 16:40:06.481580] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:22:35.262 [2024-07-13 16:40:06.481707] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:22:35.262 [2024-07-13 16:40:06.482058] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.262 BaseBdev3 00:22:35.262 16:40:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
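The waitforbdev call traced here (and repeated for every base bdev in these tests) is the suite's guard against racing bdev registration: it drains any pending examine callbacks, then asks the RPC server for the named bdev with a timeout, so the test only proceeds once the bdev is actually visible. A minimal sketch of the idiom, with a simplified body assumed rather than the exact contents of common/autotest_common.sh:

    # Sketch only: argument handling is simplified; both RPCs appear verbatim in the trace.
    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # ms; the default matches the '-t 2000' used above
        local rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
        $rpc bdev_wait_for_examine                               # let examine-on-register finish
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"   # fails if the bdev never appears
    }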
00:22:35.262 16:40:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:22:35.262 16:40:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:35.262 16:40:06 -- common/autotest_common.sh@889 -- # local i 00:22:35.262 16:40:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:35.262 16:40:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:35.262 16:40:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:35.262 16:40:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:35.521 [ 00:22:35.521 { 00:22:35.521 "name": "BaseBdev3", 00:22:35.521 "aliases": [ 00:22:35.521 "131cfa92-544e-4f92-a22f-683af0f5b3c0" 00:22:35.521 ], 00:22:35.521 "product_name": "Malloc disk", 00:22:35.521 "block_size": 512, 00:22:35.521 "num_blocks": 65536, 00:22:35.521 "uuid": "131cfa92-544e-4f92-a22f-683af0f5b3c0", 00:22:35.521 "assigned_rate_limits": { 00:22:35.521 "rw_ios_per_sec": 0, 00:22:35.521 "rw_mbytes_per_sec": 0, 00:22:35.521 "r_mbytes_per_sec": 0, 00:22:35.521 "w_mbytes_per_sec": 0 00:22:35.521 }, 00:22:35.521 "claimed": true, 00:22:35.521 "claim_type": "exclusive_write", 00:22:35.521 "zoned": false, 00:22:35.521 "supported_io_types": { 00:22:35.521 "read": true, 00:22:35.521 "write": true, 00:22:35.521 "unmap": true, 00:22:35.521 "write_zeroes": true, 00:22:35.521 "flush": true, 00:22:35.521 "reset": true, 00:22:35.521 "compare": false, 00:22:35.521 "compare_and_write": false, 00:22:35.521 "abort": true, 00:22:35.521 "nvme_admin": false, 00:22:35.521 "nvme_io": false 00:22:35.521 }, 00:22:35.521 "memory_domains": [ 00:22:35.521 { 00:22:35.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.521 "dma_device_type": 2 00:22:35.521 } 00:22:35.521 ], 00:22:35.521 "driver_specific": {} 00:22:35.521 } 00:22:35.521 ] 00:22:35.521 16:40:06 -- common/autotest_common.sh@895 -- # return 0 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.521 16:40:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.780 16:40:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.780 "name": "Existed_Raid", 00:22:35.780 "uuid": "04b6ca30-43bd-4951-9e11-ce364d06a0a6", 00:22:35.780 "strip_size_kb": 64, 00:22:35.780 "state": "online", 00:22:35.780 "raid_level": "raid5f", 00:22:35.780 "superblock": false, 00:22:35.780 "num_base_bdevs": 3, 
00:22:35.780 "num_base_bdevs_discovered": 3, 00:22:35.780 "num_base_bdevs_operational": 3, 00:22:35.780 "base_bdevs_list": [ 00:22:35.780 { 00:22:35.780 "name": "BaseBdev1", 00:22:35.780 "uuid": "0a476d1c-f13c-4ea6-813c-e83cf099be28", 00:22:35.780 "is_configured": true, 00:22:35.780 "data_offset": 0, 00:22:35.780 "data_size": 65536 00:22:35.780 }, 00:22:35.780 { 00:22:35.780 "name": "BaseBdev2", 00:22:35.780 "uuid": "db87f769-8603-4597-8cf2-a846be3d41dc", 00:22:35.780 "is_configured": true, 00:22:35.780 "data_offset": 0, 00:22:35.780 "data_size": 65536 00:22:35.780 }, 00:22:35.780 { 00:22:35.780 "name": "BaseBdev3", 00:22:35.780 "uuid": "131cfa92-544e-4f92-a22f-683af0f5b3c0", 00:22:35.780 "is_configured": true, 00:22:35.780 "data_offset": 0, 00:22:35.780 "data_size": 65536 00:22:35.780 } 00:22:35.780 ] 00:22:35.780 }' 00:22:35.780 16:40:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.780 16:40:07 -- common/autotest_common.sh@10 -- # set +x 00:22:36.347 16:40:07 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:36.605 [2024-07-13 16:40:07.998721] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.606 16:40:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.865 16:40:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:36.865 "name": "Existed_Raid", 00:22:36.865 "uuid": "04b6ca30-43bd-4951-9e11-ce364d06a0a6", 00:22:36.865 "strip_size_kb": 64, 00:22:36.865 "state": "online", 00:22:36.865 "raid_level": "raid5f", 00:22:36.865 "superblock": false, 00:22:36.865 "num_base_bdevs": 3, 00:22:36.865 "num_base_bdevs_discovered": 2, 00:22:36.865 "num_base_bdevs_operational": 2, 00:22:36.865 "base_bdevs_list": [ 00:22:36.865 { 00:22:36.865 "name": null, 00:22:36.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.865 "is_configured": false, 00:22:36.865 "data_offset": 0, 00:22:36.865 "data_size": 65536 00:22:36.865 }, 00:22:36.865 { 00:22:36.865 "name": "BaseBdev2", 00:22:36.865 "uuid": "db87f769-8603-4597-8cf2-a846be3d41dc", 00:22:36.865 "is_configured": true, 00:22:36.865 "data_offset": 0, 00:22:36.865 "data_size": 65536 00:22:36.865 }, 00:22:36.865 { 00:22:36.865 
"name": "BaseBdev3", 00:22:36.865 "uuid": "131cfa92-544e-4f92-a22f-683af0f5b3c0", 00:22:36.865 "is_configured": true, 00:22:36.865 "data_offset": 0, 00:22:36.865 "data_size": 65536 00:22:36.865 } 00:22:36.865 ] 00:22:36.865 }' 00:22:36.865 16:40:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:36.865 16:40:08 -- common/autotest_common.sh@10 -- # set +x 00:22:37.800 16:40:08 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:37.800 16:40:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:37.800 16:40:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.800 16:40:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:37.800 16:40:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:37.800 16:40:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:37.800 16:40:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:38.058 [2024-07-13 16:40:09.452639] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:38.058 [2024-07-13 16:40:09.452895] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:38.058 [2024-07-13 16:40:09.453109] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:38.058 16:40:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:38.058 16:40:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:38.058 16:40:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.058 16:40:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:38.316 16:40:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:38.316 16:40:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:38.316 16:40:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:38.575 [2024-07-13 16:40:09.986852] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:38.576 [2024-07-13 16:40:09.987210] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:22:38.576 16:40:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:38.576 16:40:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:38.576 16:40:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.576 16:40:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:38.834 16:40:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:38.834 16:40:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:38.834 16:40:10 -- bdev/bdev_raid.sh@287 -- # killprocess 137488 00:22:38.834 16:40:10 -- common/autotest_common.sh@926 -- # '[' -z 137488 ']' 00:22:38.834 16:40:10 -- common/autotest_common.sh@930 -- # kill -0 137488 00:22:38.834 16:40:10 -- common/autotest_common.sh@931 -- # uname 00:22:38.834 16:40:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:38.834 16:40:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137488 00:22:38.834 16:40:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:38.834 16:40:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:38.834 16:40:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
137488' 00:22:38.834 killing process with pid 137488 00:22:38.834 16:40:10 -- common/autotest_common.sh@945 -- # kill 137488 00:22:38.834 16:40:10 -- common/autotest_common.sh@950 -- # wait 137488 00:22:38.834 [2024-07-13 16:40:10.255502] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:38.834 [2024-07-13 16:40:10.255602] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:39.403 00:22:39.403 real 0m11.158s 00:22:39.403 user 0m19.605s 00:22:39.403 sys 0m2.075s 00:22:39.403 16:40:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.403 16:40:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.403 ************************************ 00:22:39.403 END TEST raid5f_state_function_test 00:22:39.403 ************************************ 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:22:39.403 16:40:10 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:39.403 16:40:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:39.403 16:40:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.403 ************************************ 00:22:39.403 START TEST raid5f_state_function_test_sb 00:22:39.403 ************************************ 00:22:39.403 16:40:10 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@226 -- # raid_pid=137849 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137849' 00:22:39.403 Process raid 
pid: 137849 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:39.403 16:40:10 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137849 /var/tmp/spdk-raid.sock 00:22:39.403 16:40:10 -- common/autotest_common.sh@819 -- # '[' -z 137849 ']' 00:22:39.403 16:40:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:39.403 16:40:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:39.403 16:40:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:39.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:39.403 16:40:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:39.403 16:40:10 -- common/autotest_common.sh@10 -- # set +x 00:22:39.403 [2024-07-13 16:40:10.790257] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:39.403 [2024-07-13 16:40:10.791244] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.662 [2024-07-13 16:40:10.938723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.662 [2024-07-13 16:40:11.026734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.662 [2024-07-13 16:40:11.108450] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.599 16:40:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:40.599 16:40:11 -- common/autotest_common.sh@852 -- # return 0 00:22:40.599 16:40:11 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:40.599 [2024-07-13 16:40:12.019167] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:40.599 [2024-07-13 16:40:12.019525] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:40.599 [2024-07-13 16:40:12.019616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:40.599 [2024-07-13 16:40:12.019674] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:40.599 [2024-07-13 16:40:12.019703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:40.599 [2024-07-13 16:40:12.019780] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:40.599 16:40:12 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.599 16:40:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.862 16:40:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.862 "name": "Existed_Raid", 00:22:40.862 "uuid": "793bbcfa-187c-4568-aec4-a87bd8756abb", 00:22:40.862 "strip_size_kb": 64, 00:22:40.862 "state": "configuring", 00:22:40.862 "raid_level": "raid5f", 00:22:40.862 "superblock": true, 00:22:40.862 "num_base_bdevs": 3, 00:22:40.862 "num_base_bdevs_discovered": 0, 00:22:40.862 "num_base_bdevs_operational": 3, 00:22:40.862 "base_bdevs_list": [ 00:22:40.862 { 00:22:40.862 "name": "BaseBdev1", 00:22:40.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.862 "is_configured": false, 00:22:40.862 "data_offset": 0, 00:22:40.862 "data_size": 0 00:22:40.862 }, 00:22:40.862 { 00:22:40.862 "name": "BaseBdev2", 00:22:40.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.862 "is_configured": false, 00:22:40.862 "data_offset": 0, 00:22:40.862 "data_size": 0 00:22:40.862 }, 00:22:40.862 { 00:22:40.862 "name": "BaseBdev3", 00:22:40.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.862 "is_configured": false, 00:22:40.862 "data_offset": 0, 00:22:40.862 "data_size": 0 00:22:40.862 } 00:22:40.862 ] 00:22:40.862 }' 00:22:40.862 16:40:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.862 16:40:12 -- common/autotest_common.sh@10 -- # set +x 00:22:41.430 16:40:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:41.687 [2024-07-13 16:40:13.107191] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:41.687 [2024-07-13 16:40:13.107518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:41.687 16:40:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:41.945 [2024-07-13 16:40:13.299290] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:41.945 [2024-07-13 16:40:13.299552] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:41.945 [2024-07-13 16:40:13.299631] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:41.945 [2024-07-13 16:40:13.299690] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:41.945 [2024-07-13 16:40:13.299716] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:41.945 [2024-07-13 16:40:13.299763] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:41.945 16:40:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:42.203 [2024-07-13 16:40:13.495528] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:42.203 BaseBdev1 00:22:42.203 16:40:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:42.203 16:40:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:42.203 16:40:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 
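Relative to the plain state-function test that just finished, the only functional change in this _sb variant is the '-s' handed to bdev_raid_create, which makes the raid module persist a superblock on every base bdev. The cost is visible in the JSON dumps around this point: each 65536-block malloc bdev now contributes data_offset 2048 and data_size 63488 rather than 0 and 65536, since the first 2048 blocks (1 MiB at a 512-byte block size) are reserved for metadata. A stand-alone reproduction of the setup, assuming a target already listening on the same socket:

    # Hypothetical repro; every RPC name and flag below is taken from the trace.
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"   # 32 MiB / 512 B blocks = 65536 blocks
    done
    $rpc bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid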
00:22:42.203 16:40:13 -- common/autotest_common.sh@889 -- # local i 00:22:42.203 16:40:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:42.203 16:40:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:42.203 16:40:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:42.468 16:40:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:42.727 [ 00:22:42.727 { 00:22:42.727 "name": "BaseBdev1", 00:22:42.727 "aliases": [ 00:22:42.727 "69efc388-9cd1-42ba-a13b-800abe0d566d" 00:22:42.727 ], 00:22:42.727 "product_name": "Malloc disk", 00:22:42.727 "block_size": 512, 00:22:42.727 "num_blocks": 65536, 00:22:42.727 "uuid": "69efc388-9cd1-42ba-a13b-800abe0d566d", 00:22:42.727 "assigned_rate_limits": { 00:22:42.727 "rw_ios_per_sec": 0, 00:22:42.727 "rw_mbytes_per_sec": 0, 00:22:42.727 "r_mbytes_per_sec": 0, 00:22:42.727 "w_mbytes_per_sec": 0 00:22:42.727 }, 00:22:42.727 "claimed": true, 00:22:42.727 "claim_type": "exclusive_write", 00:22:42.727 "zoned": false, 00:22:42.727 "supported_io_types": { 00:22:42.727 "read": true, 00:22:42.727 "write": true, 00:22:42.727 "unmap": true, 00:22:42.727 "write_zeroes": true, 00:22:42.727 "flush": true, 00:22:42.727 "reset": true, 00:22:42.727 "compare": false, 00:22:42.727 "compare_and_write": false, 00:22:42.727 "abort": true, 00:22:42.727 "nvme_admin": false, 00:22:42.727 "nvme_io": false 00:22:42.727 }, 00:22:42.727 "memory_domains": [ 00:22:42.727 { 00:22:42.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.727 "dma_device_type": 2 00:22:42.727 } 00:22:42.727 ], 00:22:42.727 "driver_specific": {} 00:22:42.727 } 00:22:42.727 ] 00:22:42.727 16:40:13 -- common/autotest_common.sh@895 -- # return 0 00:22:42.727 16:40:13 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:42.727 16:40:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:42.727 16:40:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:42.727 16:40:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:42.727 16:40:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:42.727 16:40:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:42.727 16:40:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.727 16:40:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.728 16:40:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.728 16:40:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.728 16:40:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.728 16:40:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.986 16:40:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.986 "name": "Existed_Raid", 00:22:42.986 "uuid": "e4b47fad-2bc2-44d6-ba64-73bc29801844", 00:22:42.986 "strip_size_kb": 64, 00:22:42.986 "state": "configuring", 00:22:42.986 "raid_level": "raid5f", 00:22:42.986 "superblock": true, 00:22:42.986 "num_base_bdevs": 3, 00:22:42.986 "num_base_bdevs_discovered": 1, 00:22:42.987 "num_base_bdevs_operational": 3, 00:22:42.987 "base_bdevs_list": [ 00:22:42.987 { 00:22:42.987 "name": "BaseBdev1", 00:22:42.987 "uuid": "69efc388-9cd1-42ba-a13b-800abe0d566d", 00:22:42.987 "is_configured": true, 00:22:42.987 
"data_offset": 2048, 00:22:42.987 "data_size": 63488 00:22:42.987 }, 00:22:42.987 { 00:22:42.987 "name": "BaseBdev2", 00:22:42.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.987 "is_configured": false, 00:22:42.987 "data_offset": 0, 00:22:42.987 "data_size": 0 00:22:42.987 }, 00:22:42.987 { 00:22:42.987 "name": "BaseBdev3", 00:22:42.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.987 "is_configured": false, 00:22:42.987 "data_offset": 0, 00:22:42.987 "data_size": 0 00:22:42.987 } 00:22:42.987 ] 00:22:42.987 }' 00:22:42.987 16:40:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.987 16:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:43.555 16:40:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:43.555 [2024-07-13 16:40:14.947851] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:43.555 [2024-07-13 16:40:14.948123] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:43.555 16:40:14 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:22:43.556 16:40:14 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:43.814 16:40:15 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:44.073 BaseBdev1 00:22:44.073 16:40:15 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:22:44.073 16:40:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:44.073 16:40:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:44.073 16:40:15 -- common/autotest_common.sh@889 -- # local i 00:22:44.073 16:40:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:44.073 16:40:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:44.073 16:40:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:44.332 16:40:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:44.332 [ 00:22:44.332 { 00:22:44.332 "name": "BaseBdev1", 00:22:44.332 "aliases": [ 00:22:44.332 "67c94b6b-11e5-4427-a98b-2ebd6b1af43a" 00:22:44.332 ], 00:22:44.332 "product_name": "Malloc disk", 00:22:44.332 "block_size": 512, 00:22:44.332 "num_blocks": 65536, 00:22:44.332 "uuid": "67c94b6b-11e5-4427-a98b-2ebd6b1af43a", 00:22:44.332 "assigned_rate_limits": { 00:22:44.332 "rw_ios_per_sec": 0, 00:22:44.332 "rw_mbytes_per_sec": 0, 00:22:44.332 "r_mbytes_per_sec": 0, 00:22:44.332 "w_mbytes_per_sec": 0 00:22:44.332 }, 00:22:44.332 "claimed": false, 00:22:44.332 "zoned": false, 00:22:44.332 "supported_io_types": { 00:22:44.332 "read": true, 00:22:44.332 "write": true, 00:22:44.332 "unmap": true, 00:22:44.332 "write_zeroes": true, 00:22:44.332 "flush": true, 00:22:44.332 "reset": true, 00:22:44.332 "compare": false, 00:22:44.332 "compare_and_write": false, 00:22:44.332 "abort": true, 00:22:44.332 "nvme_admin": false, 00:22:44.332 "nvme_io": false 00:22:44.332 }, 00:22:44.332 "memory_domains": [ 00:22:44.332 { 00:22:44.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.332 "dma_device_type": 2 00:22:44.332 } 00:22:44.332 ], 00:22:44.332 "driver_specific": {} 00:22:44.332 } 00:22:44.332 ] 00:22:44.332 16:40:15 -- 
common/autotest_common.sh@895 -- # return 0 00:22:44.332 16:40:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:44.591 [2024-07-13 16:40:15.913142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:44.591 [2024-07-13 16:40:15.916795] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.591 [2024-07-13 16:40:15.917002] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.591 [2024-07-13 16:40:15.917092] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.592 [2024-07-13 16:40:15.917163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.592 16:40:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.850 16:40:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.850 "name": "Existed_Raid", 00:22:44.850 "uuid": "6209ed1d-ec3f-4b70-8360-9df2ce78e011", 00:22:44.850 "strip_size_kb": 64, 00:22:44.850 "state": "configuring", 00:22:44.850 "raid_level": "raid5f", 00:22:44.850 "superblock": true, 00:22:44.850 "num_base_bdevs": 3, 00:22:44.850 "num_base_bdevs_discovered": 1, 00:22:44.850 "num_base_bdevs_operational": 3, 00:22:44.850 "base_bdevs_list": [ 00:22:44.850 { 00:22:44.850 "name": "BaseBdev1", 00:22:44.850 "uuid": "67c94b6b-11e5-4427-a98b-2ebd6b1af43a", 00:22:44.850 "is_configured": true, 00:22:44.850 "data_offset": 2048, 00:22:44.850 "data_size": 63488 00:22:44.850 }, 00:22:44.850 { 00:22:44.850 "name": "BaseBdev2", 00:22:44.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.850 "is_configured": false, 00:22:44.850 "data_offset": 0, 00:22:44.850 "data_size": 0 00:22:44.850 }, 00:22:44.850 { 00:22:44.850 "name": "BaseBdev3", 00:22:44.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.850 "is_configured": false, 00:22:44.850 "data_offset": 0, 00:22:44.850 "data_size": 0 00:22:44.850 } 00:22:44.850 ] 00:22:44.850 }' 00:22:44.850 16:40:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.850 16:40:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.416 16:40:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2 00:22:45.675 [2024-07-13 16:40:16.962279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:45.675 BaseBdev2 00:22:45.675 16:40:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:45.675 16:40:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:22:45.675 16:40:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:45.675 16:40:16 -- common/autotest_common.sh@889 -- # local i 00:22:45.675 16:40:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:45.675 16:40:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:45.675 16:40:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:45.933 16:40:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:46.191 [ 00:22:46.191 { 00:22:46.191 "name": "BaseBdev2", 00:22:46.191 "aliases": [ 00:22:46.191 "e3de22f1-4ece-4962-afa8-7ce0d7e21a4b" 00:22:46.191 ], 00:22:46.191 "product_name": "Malloc disk", 00:22:46.191 "block_size": 512, 00:22:46.191 "num_blocks": 65536, 00:22:46.191 "uuid": "e3de22f1-4ece-4962-afa8-7ce0d7e21a4b", 00:22:46.191 "assigned_rate_limits": { 00:22:46.191 "rw_ios_per_sec": 0, 00:22:46.191 "rw_mbytes_per_sec": 0, 00:22:46.191 "r_mbytes_per_sec": 0, 00:22:46.191 "w_mbytes_per_sec": 0 00:22:46.191 }, 00:22:46.191 "claimed": true, 00:22:46.191 "claim_type": "exclusive_write", 00:22:46.191 "zoned": false, 00:22:46.191 "supported_io_types": { 00:22:46.191 "read": true, 00:22:46.191 "write": true, 00:22:46.191 "unmap": true, 00:22:46.191 "write_zeroes": true, 00:22:46.191 "flush": true, 00:22:46.191 "reset": true, 00:22:46.191 "compare": false, 00:22:46.191 "compare_and_write": false, 00:22:46.191 "abort": true, 00:22:46.191 "nvme_admin": false, 00:22:46.191 "nvme_io": false 00:22:46.191 }, 00:22:46.191 "memory_domains": [ 00:22:46.191 { 00:22:46.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.191 "dma_device_type": 2 00:22:46.191 } 00:22:46.191 ], 00:22:46.191 "driver_specific": {} 00:22:46.191 } 00:22:46.191 ] 00:22:46.191 16:40:17 -- common/autotest_common.sh@895 -- # return 0 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.191 16:40:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.449 16:40:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:46.449 "name": 
"Existed_Raid", 00:22:46.449 "uuid": "6209ed1d-ec3f-4b70-8360-9df2ce78e011", 00:22:46.449 "strip_size_kb": 64, 00:22:46.449 "state": "configuring", 00:22:46.449 "raid_level": "raid5f", 00:22:46.449 "superblock": true, 00:22:46.449 "num_base_bdevs": 3, 00:22:46.449 "num_base_bdevs_discovered": 2, 00:22:46.449 "num_base_bdevs_operational": 3, 00:22:46.449 "base_bdevs_list": [ 00:22:46.449 { 00:22:46.449 "name": "BaseBdev1", 00:22:46.449 "uuid": "67c94b6b-11e5-4427-a98b-2ebd6b1af43a", 00:22:46.449 "is_configured": true, 00:22:46.449 "data_offset": 2048, 00:22:46.449 "data_size": 63488 00:22:46.449 }, 00:22:46.449 { 00:22:46.449 "name": "BaseBdev2", 00:22:46.449 "uuid": "e3de22f1-4ece-4962-afa8-7ce0d7e21a4b", 00:22:46.449 "is_configured": true, 00:22:46.449 "data_offset": 2048, 00:22:46.449 "data_size": 63488 00:22:46.449 }, 00:22:46.449 { 00:22:46.449 "name": "BaseBdev3", 00:22:46.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.449 "is_configured": false, 00:22:46.449 "data_offset": 0, 00:22:46.449 "data_size": 0 00:22:46.449 } 00:22:46.449 ] 00:22:46.449 }' 00:22:46.449 16:40:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:46.449 16:40:17 -- common/autotest_common.sh@10 -- # set +x 00:22:47.015 16:40:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:47.274 [2024-07-13 16:40:18.554158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:47.274 [2024-07-13 16:40:18.554746] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:22:47.274 [2024-07-13 16:40:18.554893] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:47.274 [2024-07-13 16:40:18.555112] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:22:47.274 [2024-07-13 16:40:18.555952] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:22:47.274 [2024-07-13 16:40:18.556069] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:22:47.274 BaseBdev3 00:22:47.274 [2024-07-13 16:40:18.556364] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.274 16:40:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:47.274 16:40:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:22:47.274 16:40:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:47.274 16:40:18 -- common/autotest_common.sh@889 -- # local i 00:22:47.274 16:40:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:47.274 16:40:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:47.274 16:40:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:47.533 16:40:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:47.791 [ 00:22:47.791 { 00:22:47.791 "name": "BaseBdev3", 00:22:47.791 "aliases": [ 00:22:47.791 "fc6a0fc9-f0f9-4b2e-9d9e-3b19530d4390" 00:22:47.791 ], 00:22:47.791 "product_name": "Malloc disk", 00:22:47.791 "block_size": 512, 00:22:47.791 "num_blocks": 65536, 00:22:47.791 "uuid": "fc6a0fc9-f0f9-4b2e-9d9e-3b19530d4390", 00:22:47.791 "assigned_rate_limits": { 00:22:47.791 "rw_ios_per_sec": 0, 00:22:47.791 "rw_mbytes_per_sec": 0, 00:22:47.791 
"r_mbytes_per_sec": 0, 00:22:47.791 "w_mbytes_per_sec": 0 00:22:47.791 }, 00:22:47.791 "claimed": true, 00:22:47.791 "claim_type": "exclusive_write", 00:22:47.791 "zoned": false, 00:22:47.791 "supported_io_types": { 00:22:47.791 "read": true, 00:22:47.791 "write": true, 00:22:47.791 "unmap": true, 00:22:47.791 "write_zeroes": true, 00:22:47.791 "flush": true, 00:22:47.791 "reset": true, 00:22:47.791 "compare": false, 00:22:47.791 "compare_and_write": false, 00:22:47.791 "abort": true, 00:22:47.791 "nvme_admin": false, 00:22:47.791 "nvme_io": false 00:22:47.791 }, 00:22:47.791 "memory_domains": [ 00:22:47.791 { 00:22:47.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.791 "dma_device_type": 2 00:22:47.791 } 00:22:47.791 ], 00:22:47.791 "driver_specific": {} 00:22:47.791 } 00:22:47.791 ] 00:22:47.791 16:40:19 -- common/autotest_common.sh@895 -- # return 0 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.791 16:40:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.049 16:40:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.049 "name": "Existed_Raid", 00:22:48.049 "uuid": "6209ed1d-ec3f-4b70-8360-9df2ce78e011", 00:22:48.049 "strip_size_kb": 64, 00:22:48.049 "state": "online", 00:22:48.049 "raid_level": "raid5f", 00:22:48.049 "superblock": true, 00:22:48.049 "num_base_bdevs": 3, 00:22:48.049 "num_base_bdevs_discovered": 3, 00:22:48.049 "num_base_bdevs_operational": 3, 00:22:48.049 "base_bdevs_list": [ 00:22:48.049 { 00:22:48.049 "name": "BaseBdev1", 00:22:48.049 "uuid": "67c94b6b-11e5-4427-a98b-2ebd6b1af43a", 00:22:48.049 "is_configured": true, 00:22:48.049 "data_offset": 2048, 00:22:48.049 "data_size": 63488 00:22:48.049 }, 00:22:48.049 { 00:22:48.049 "name": "BaseBdev2", 00:22:48.049 "uuid": "e3de22f1-4ece-4962-afa8-7ce0d7e21a4b", 00:22:48.049 "is_configured": true, 00:22:48.049 "data_offset": 2048, 00:22:48.049 "data_size": 63488 00:22:48.049 }, 00:22:48.049 { 00:22:48.049 "name": "BaseBdev3", 00:22:48.049 "uuid": "fc6a0fc9-f0f9-4b2e-9d9e-3b19530d4390", 00:22:48.049 "is_configured": true, 00:22:48.049 "data_offset": 2048, 00:22:48.049 "data_size": 63488 00:22:48.049 } 00:22:48.049 ] 00:22:48.049 }' 00:22:48.049 16:40:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.049 16:40:19 -- common/autotest_common.sh@10 -- # set +x 00:22:48.616 16:40:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:48.874 [2024-07-13 
16:40:20.139263] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.874 16:40:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.133 16:40:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:49.133 "name": "Existed_Raid", 00:22:49.133 "uuid": "6209ed1d-ec3f-4b70-8360-9df2ce78e011", 00:22:49.133 "strip_size_kb": 64, 00:22:49.133 "state": "online", 00:22:49.133 "raid_level": "raid5f", 00:22:49.133 "superblock": true, 00:22:49.133 "num_base_bdevs": 3, 00:22:49.133 "num_base_bdevs_discovered": 2, 00:22:49.133 "num_base_bdevs_operational": 2, 00:22:49.133 "base_bdevs_list": [ 00:22:49.133 { 00:22:49.133 "name": null, 00:22:49.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.133 "is_configured": false, 00:22:49.133 "data_offset": 2048, 00:22:49.133 "data_size": 63488 00:22:49.133 }, 00:22:49.133 { 00:22:49.133 "name": "BaseBdev2", 00:22:49.133 "uuid": "e3de22f1-4ece-4962-afa8-7ce0d7e21a4b", 00:22:49.133 "is_configured": true, 00:22:49.133 "data_offset": 2048, 00:22:49.133 "data_size": 63488 00:22:49.133 }, 00:22:49.133 { 00:22:49.133 "name": "BaseBdev3", 00:22:49.133 "uuid": "fc6a0fc9-f0f9-4b2e-9d9e-3b19530d4390", 00:22:49.133 "is_configured": true, 00:22:49.133 "data_offset": 2048, 00:22:49.133 "data_size": 63488 00:22:49.133 } 00:22:49.133 ] 00:22:49.133 }' 00:22:49.133 16:40:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:49.133 16:40:20 -- common/autotest_common.sh@10 -- # set +x 00:22:49.700 16:40:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:49.700 16:40:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:49.700 16:40:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.700 16:40:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:49.968 16:40:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:49.968 16:40:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:49.968 16:40:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:50.242 [2024-07-13 16:40:21.439690] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:22:50.242 [2024-07-13 16:40:21.439994] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:50.242 [2024-07-13 16:40:21.440220] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:50.242 16:40:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:50.242 16:40:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:50.242 16:40:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.242 16:40:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:50.501 16:40:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:50.501 16:40:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:50.501 16:40:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:50.501 [2024-07-13 16:40:21.969562] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:50.501 [2024-07-13 16:40:21.969922] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:22:50.759 16:40:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:50.759 16:40:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:50.759 16:40:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.759 16:40:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:50.759 16:40:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:50.759 16:40:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:50.759 16:40:22 -- bdev/bdev_raid.sh@287 -- # killprocess 137849 00:22:50.759 16:40:22 -- common/autotest_common.sh@926 -- # '[' -z 137849 ']' 00:22:50.759 16:40:22 -- common/autotest_common.sh@930 -- # kill -0 137849 00:22:50.759 16:40:22 -- common/autotest_common.sh@931 -- # uname 00:22:50.759 16:40:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:50.759 16:40:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137849 00:22:50.759 16:40:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:50.759 16:40:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:50.759 16:40:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137849' 00:22:50.759 killing process with pid 137849 00:22:50.759 16:40:22 -- common/autotest_common.sh@945 -- # kill 137849 00:22:50.759 [2024-07-13 16:40:22.226747] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:50.759 16:40:22 -- common/autotest_common.sh@950 -- # wait 137849 00:22:50.759 [2024-07-13 16:40:22.227053] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:51.327 00:22:51.327 real 0m11.898s 00:22:51.327 user 0m20.785s 00:22:51.327 sys 0m2.259s 00:22:51.327 16:40:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:51.327 16:40:22 -- common/autotest_common.sh@10 -- # set +x 00:22:51.327 ************************************ 00:22:51.327 END TEST raid5f_state_function_test_sb 00:22:51.327 ************************************ 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:22:51.327 16:40:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:22:51.327 16:40:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:51.327 
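The START TEST / END TEST banners framing each block of output come from the run_test wrapper invoked on this line. A rough sketch of its behavior, assuming a simplified body (the real helper in common/autotest_common.sh is also what produces the real/user/sys timing lines above):

    # Simplified assumption of the wrapper behind the banners, not the exact helper.
    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"        # e.g. raid_superblock_test raid5f 3
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }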
16:40:22 -- common/autotest_common.sh@10 -- # set +x 00:22:51.327 ************************************ 00:22:51.327 START TEST raid5f_superblock_test 00:22:51.327 ************************************ 00:22:51.327 16:40:22 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:51.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@357 -- # raid_pid=138235 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:51.327 16:40:22 -- bdev/bdev_raid.sh@358 -- # waitforlisten 138235 /var/tmp/spdk-raid.sock 00:22:51.327 16:40:22 -- common/autotest_common.sh@819 -- # '[' -z 138235 ']' 00:22:51.327 16:40:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:51.328 16:40:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:51.328 16:40:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:51.328 16:40:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:51.328 16:40:22 -- common/autotest_common.sh@10 -- # set +x 00:22:51.328 [2024-07-13 16:40:22.756674] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
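
Condensed, the preamble above is the test's bring-up: start bdev_svc as the RPC target on a private socket, then block until it accepts connections. Paths and flags are exactly as traced; the backgrounding and pid capture are inferred (xtrace does not show them), and waitforlisten is the autotest_common.sh helper whose expansion appears in the trace:

```bash
# Bring-up of the RPC target for this test; command line as traced above.
# Backgrounding and $! capture are inferred rather than shown by xtrace.
spdk_dir=/home/vagrant/spdk_repo/spdk
"$spdk_dir/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!                               # 138235 in this run
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # poll until the socket answers RPCs
```
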
00:22:51.328 [2024-07-13 16:40:22.757160] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138235 ] 00:22:51.589 [2024-07-13 16:40:22.900579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.589 [2024-07-13 16:40:22.984071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.847 [2024-07-13 16:40:23.063327] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:52.414 16:40:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:52.414 16:40:23 -- common/autotest_common.sh@852 -- # return 0 00:22:52.414 16:40:23 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:52.414 16:40:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:52.414 16:40:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:52.414 16:40:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:52.414 16:40:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:52.414 16:40:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:52.414 16:40:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:52.414 16:40:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:52.414 16:40:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:52.414 malloc1 00:22:52.673 16:40:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:52.673 [2024-07-13 16:40:24.124992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:52.673 [2024-07-13 16:40:24.125307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.673 [2024-07-13 16:40:24.125480] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:22:52.673 [2024-07-13 16:40:24.125611] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.673 [2024-07-13 16:40:24.128837] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.673 [2024-07-13 16:40:24.129020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:52.673 pt1 00:22:52.932 16:40:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:52.932 16:40:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:52.932 16:40:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:52.932 16:40:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:52.932 16:40:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:52.932 16:40:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:52.932 16:40:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:52.932 16:40:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:52.932 16:40:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:52.932 malloc2 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:22:53.191 [2024-07-13 16:40:24.577251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:53.191 [2024-07-13 16:40:24.577627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.191 [2024-07-13 16:40:24.577704] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:53.191 [2024-07-13 16:40:24.577848] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.191 [2024-07-13 16:40:24.580651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.191 [2024-07-13 16:40:24.580829] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:53.191 pt2 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:53.191 16:40:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:53.451 malloc3 00:22:53.451 16:40:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:53.710 [2024-07-13 16:40:24.956383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:53.710 [2024-07-13 16:40:24.956727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.710 [2024-07-13 16:40:24.956814] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:53.710 [2024-07-13 16:40:24.956931] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.710 [2024-07-13 16:40:24.959843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.710 [2024-07-13 16:40:24.960016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:53.710 pt3 00:22:53.710 16:40:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:53.710 16:40:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:53.710 16:40:24 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:53.710 [2024-07-13 16:40:25.136566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:53.710 [2024-07-13 16:40:25.139410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:53.710 [2024-07-13 16:40:25.139609] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:53.711 [2024-07-13 16:40:25.139884] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:22:53.711 [2024-07-13 16:40:25.139999] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:53.711 [2024-07-13 16:40:25.140220] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:22:53.711 [2024-07-13 16:40:25.141147] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:22:53.711 [2024-07-13 16:40:25.141275] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:22:53.711 [2024-07-13 16:40:25.141685] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.711 16:40:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.969 16:40:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:53.970 "name": "raid_bdev1", 00:22:53.970 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:22:53.970 "strip_size_kb": 64, 00:22:53.970 "state": "online", 00:22:53.970 "raid_level": "raid5f", 00:22:53.970 "superblock": true, 00:22:53.970 "num_base_bdevs": 3, 00:22:53.970 "num_base_bdevs_discovered": 3, 00:22:53.970 "num_base_bdevs_operational": 3, 00:22:53.970 "base_bdevs_list": [ 00:22:53.970 { 00:22:53.970 "name": "pt1", 00:22:53.970 "uuid": "4728a93b-35c2-5a2a-9d50-392b1e6ac5df", 00:22:53.970 "is_configured": true, 00:22:53.970 "data_offset": 2048, 00:22:53.970 "data_size": 63488 00:22:53.970 }, 00:22:53.970 { 00:22:53.970 "name": "pt2", 00:22:53.970 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 00:22:53.970 "is_configured": true, 00:22:53.970 "data_offset": 2048, 00:22:53.970 "data_size": 63488 00:22:53.970 }, 00:22:53.970 { 00:22:53.970 "name": "pt3", 00:22:53.970 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:22:53.970 "is_configured": true, 00:22:53.970 "data_offset": 2048, 00:22:53.970 "data_size": 63488 00:22:53.970 } 00:22:53.970 ] 00:22:53.970 }' 00:22:53.970 16:40:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:53.970 16:40:25 -- common/autotest_common.sh@10 -- # set +x 00:22:54.537 16:40:25 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:54.537 16:40:25 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:54.796 [2024-07-13 16:40:26.201985] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.796 16:40:26 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae 00:22:54.796 16:40:26 -- bdev/bdev_raid.sh@380 -- # '[' -z 3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae ']' 00:22:54.796 16:40:26 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:55.056 [2024-07-13 16:40:26.465865] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.056 [2024-07-13 16:40:26.466035] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.056 [2024-07-13 16:40:26.466294] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.056 [2024-07-13 16:40:26.466512] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.056 [2024-07-13 16:40:26.466607] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:22:55.056 16:40:26 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.056 16:40:26 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:55.315 16:40:26 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:55.315 16:40:26 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:55.315 16:40:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:55.315 16:40:26 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:55.574 16:40:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:55.574 16:40:26 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:55.833 16:40:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:55.833 16:40:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:55.833 16:40:27 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:55.833 16:40:27 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:56.093 16:40:27 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:56.093 16:40:27 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:56.093 16:40:27 -- common/autotest_common.sh@640 -- # local es=0 00:22:56.093 16:40:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:56.093 16:40:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:56.093 16:40:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:56.093 16:40:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:56.093 16:40:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:56.093 16:40:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:56.093 16:40:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:56.093 16:40:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:56.093 16:40:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:56.093 16:40:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:56.353 [2024-07-13 16:40:27.638071] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:56.353 [2024-07-13 16:40:27.640824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:56.353 [2024-07-13 16:40:27.640993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:56.353 [2024-07-13 16:40:27.641078] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:56.353 [2024-07-13 16:40:27.641277] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:56.353 [2024-07-13 16:40:27.641422] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:56.353 [2024-07-13 16:40:27.641512] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:56.353 [2024-07-13 16:40:27.641587] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:22:56.353 request: 00:22:56.353 { 00:22:56.353 "name": "raid_bdev1", 00:22:56.353 "raid_level": "raid5f", 00:22:56.353 "base_bdevs": [ 00:22:56.353 "malloc1", 00:22:56.353 "malloc2", 00:22:56.353 "malloc3" 00:22:56.353 ], 00:22:56.353 "superblock": false, 00:22:56.353 "strip_size_kb": 64, 00:22:56.353 "method": "bdev_raid_create", 00:22:56.353 "req_id": 1 00:22:56.353 } 00:22:56.353 Got JSON-RPC error response 00:22:56.353 response: 00:22:56.353 { 00:22:56.353 "code": -17, 00:22:56.353 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:56.353 } 00:22:56.353 16:40:27 -- common/autotest_common.sh@643 -- # es=1 00:22:56.353 16:40:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:56.353 16:40:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:56.353 16:40:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:56.353 16:40:27 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.353 16:40:27 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:56.613 16:40:27 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:56.613 16:40:27 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:56.613 16:40:27 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:56.613 [2024-07-13 16:40:28.054217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:56.613 [2024-07-13 16:40:28.054616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.613 [2024-07-13 16:40:28.054695] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:56.613 [2024-07-13 16:40:28.054800] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.613 [2024-07-13 16:40:28.057663] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.613 [2024-07-13 16:40:28.057828] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:56.613 [2024-07-13 16:40:28.058034] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:56.613 [2024-07-13 16:40:28.058227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:56.613 pt1 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.613 16:40:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.873 16:40:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:56.873 "name": "raid_bdev1", 00:22:56.873 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:22:56.873 "strip_size_kb": 64, 00:22:56.873 "state": "configuring", 00:22:56.873 "raid_level": "raid5f", 00:22:56.873 "superblock": true, 00:22:56.873 "num_base_bdevs": 3, 00:22:56.873 "num_base_bdevs_discovered": 1, 00:22:56.873 "num_base_bdevs_operational": 3, 00:22:56.873 "base_bdevs_list": [ 00:22:56.873 { 00:22:56.873 "name": "pt1", 00:22:56.873 "uuid": "4728a93b-35c2-5a2a-9d50-392b1e6ac5df", 00:22:56.873 "is_configured": true, 00:22:56.873 "data_offset": 2048, 00:22:56.873 "data_size": 63488 00:22:56.873 }, 00:22:56.873 { 00:22:56.873 "name": null, 00:22:56.873 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 00:22:56.873 "is_configured": false, 00:22:56.873 "data_offset": 2048, 00:22:56.873 "data_size": 63488 00:22:56.873 }, 00:22:56.873 { 00:22:56.873 "name": null, 00:22:56.873 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:22:56.873 "is_configured": false, 00:22:56.873 "data_offset": 2048, 00:22:56.873 "data_size": 63488 00:22:56.873 } 00:22:56.873 ] 00:22:56.873 }' 00:22:56.873 16:40:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:56.873 16:40:28 -- common/autotest_common.sh@10 -- # set +x 00:22:57.441 16:40:28 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:22:57.441 16:40:28 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:57.700 [2024-07-13 16:40:29.038675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:57.700 [2024-07-13 16:40:29.038956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.700 [2024-07-13 16:40:29.039036] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:57.700 [2024-07-13 16:40:29.039156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.700 [2024-07-13 16:40:29.039671] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.700 [2024-07-13 16:40:29.039816] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:57.700 [2024-07-13 16:40:29.040006] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:57.700 [2024-07-13 16:40:29.040103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:57.700 pt2 00:22:57.700 16:40:29 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:57.959 [2024-07-13 16:40:29.290779] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.959 16:40:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.218 16:40:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:58.218 "name": "raid_bdev1", 00:22:58.218 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:22:58.218 "strip_size_kb": 64, 00:22:58.218 "state": "configuring", 00:22:58.218 "raid_level": "raid5f", 00:22:58.218 "superblock": true, 00:22:58.218 "num_base_bdevs": 3, 00:22:58.218 "num_base_bdevs_discovered": 1, 00:22:58.218 "num_base_bdevs_operational": 3, 00:22:58.218 "base_bdevs_list": [ 00:22:58.218 { 00:22:58.218 "name": "pt1", 00:22:58.218 "uuid": "4728a93b-35c2-5a2a-9d50-392b1e6ac5df", 00:22:58.218 "is_configured": true, 00:22:58.218 "data_offset": 2048, 00:22:58.218 "data_size": 63488 00:22:58.218 }, 00:22:58.218 { 00:22:58.218 "name": null, 00:22:58.218 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 00:22:58.218 "is_configured": false, 00:22:58.218 "data_offset": 2048, 00:22:58.218 "data_size": 63488 00:22:58.218 }, 00:22:58.218 { 00:22:58.218 "name": null, 00:22:58.218 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:22:58.218 "is_configured": false, 00:22:58.218 "data_offset": 2048, 00:22:58.218 "data_size": 63488 00:22:58.218 } 00:22:58.218 ] 00:22:58.218 }' 00:22:58.218 16:40:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:58.218 16:40:29 -- common/autotest_common.sh@10 -- # set +x 00:22:58.785 16:40:30 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:58.785 16:40:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:58.785 16:40:30 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:59.044 [2024-07-13 16:40:30.390896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:59.044 [2024-07-13 16:40:30.391229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.044 [2024-07-13 16:40:30.391303] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:59.044 [2024-07-13 16:40:30.391404] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.044 [2024-07-13 16:40:30.391970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.044 [2024-07-13 16:40:30.392117] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:59.044 [2024-07-13 16:40:30.392318] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:59.044 [2024-07-13 16:40:30.392417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:59.044 pt2 00:22:59.044 16:40:30 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:59.044 16:40:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:59.044 16:40:30 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:59.303 [2024-07-13 16:40:30.659023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:59.303 [2024-07-13 16:40:30.659335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.303 [2024-07-13 16:40:30.659408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:59.303 [2024-07-13 16:40:30.659512] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.303 [2024-07-13 16:40:30.660079] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.303 [2024-07-13 16:40:30.660225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:59.303 [2024-07-13 16:40:30.660433] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:59.303 [2024-07-13 16:40:30.660562] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:59.303 [2024-07-13 16:40:30.660758] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:22:59.303 [2024-07-13 16:40:30.660860] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:59.303 [2024-07-13 16:40:30.660960] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:22:59.303 [2024-07-13 16:40:30.661614] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:22:59.303 [2024-07-13 16:40:30.661731] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:22:59.303 [2024-07-13 16:40:30.661917] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.303 pt3 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.303 16:40:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.303 
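
This step is the crux of the superblock test: the second time around, no bdev_raid_create is issued. Re-creating the passthru bdevs is enough, because the bdev_raid examine path spots the raid5f superblock on each one ("raid superblock found on bdev pt2/pt3" above) and re-assembles raid_bdev1 on its own. The same step as bare RPC calls, with $rpc as shorthand for the traced rpc.py invocation rather than a variable from the script:

```bash
# Re-assembly from on-disk superblocks, condensed from the trace above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
# No bdev_raid_create needed: the examine hook reads the superblocks and
# brings raid_bdev1 back online by itself.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
# expected at this point in the test: online
```
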
16:40:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.562 16:40:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:59.562 "name": "raid_bdev1", 00:22:59.562 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:22:59.562 "strip_size_kb": 64, 00:22:59.562 "state": "online", 00:22:59.562 "raid_level": "raid5f", 00:22:59.562 "superblock": true, 00:22:59.562 "num_base_bdevs": 3, 00:22:59.562 "num_base_bdevs_discovered": 3, 00:22:59.562 "num_base_bdevs_operational": 3, 00:22:59.562 "base_bdevs_list": [ 00:22:59.562 { 00:22:59.562 "name": "pt1", 00:22:59.562 "uuid": "4728a93b-35c2-5a2a-9d50-392b1e6ac5df", 00:22:59.562 "is_configured": true, 00:22:59.562 "data_offset": 2048, 00:22:59.562 "data_size": 63488 00:22:59.562 }, 00:22:59.562 { 00:22:59.562 "name": "pt2", 00:22:59.562 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 00:22:59.562 "is_configured": true, 00:22:59.562 "data_offset": 2048, 00:22:59.562 "data_size": 63488 00:22:59.562 }, 00:22:59.562 { 00:22:59.562 "name": "pt3", 00:22:59.562 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:22:59.562 "is_configured": true, 00:22:59.562 "data_offset": 2048, 00:22:59.562 "data_size": 63488 00:22:59.562 } 00:22:59.562 ] 00:22:59.562 }' 00:22:59.562 16:40:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:59.562 16:40:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.166 16:40:31 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:00.166 16:40:31 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:00.166 [2024-07-13 16:40:31.604465] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.166 16:40:31 -- bdev/bdev_raid.sh@430 -- # '[' 3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae '!=' 3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae ']' 00:23:00.166 16:40:31 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:00.166 16:40:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:00.166 16:40:31 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:00.166 16:40:31 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:00.429 [2024-07-13 16:40:31.872373] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.429 16:40:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.689 16:40:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.689 16:40:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.689 16:40:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.689 "name": "raid_bdev1", 00:23:00.689 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:23:00.689 "strip_size_kb": 64, 
00:23:00.689 "state": "online", 00:23:00.689 "raid_level": "raid5f", 00:23:00.689 "superblock": true, 00:23:00.689 "num_base_bdevs": 3, 00:23:00.689 "num_base_bdevs_discovered": 2, 00:23:00.689 "num_base_bdevs_operational": 2, 00:23:00.689 "base_bdevs_list": [ 00:23:00.689 { 00:23:00.689 "name": null, 00:23:00.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.689 "is_configured": false, 00:23:00.689 "data_offset": 2048, 00:23:00.689 "data_size": 63488 00:23:00.689 }, 00:23:00.689 { 00:23:00.689 "name": "pt2", 00:23:00.689 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 00:23:00.689 "is_configured": true, 00:23:00.689 "data_offset": 2048, 00:23:00.689 "data_size": 63488 00:23:00.689 }, 00:23:00.689 { 00:23:00.689 "name": "pt3", 00:23:00.689 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:23:00.689 "is_configured": true, 00:23:00.689 "data_offset": 2048, 00:23:00.689 "data_size": 63488 00:23:00.689 } 00:23:00.689 ] 00:23:00.689 }' 00:23:00.689 16:40:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.689 16:40:32 -- common/autotest_common.sh@10 -- # set +x 00:23:01.256 16:40:32 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:01.525 [2024-07-13 16:40:32.768566] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:01.525 [2024-07-13 16:40:32.768823] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:01.525 [2024-07-13 16:40:32.769059] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.525 [2024-07-13 16:40:32.769168] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.525 [2024-07-13 16:40:32.769383] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:01.525 16:40:32 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.525 16:40:32 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:01.788 16:40:33 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:01.788 16:40:33 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:01.788 16:40:33 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:01.788 16:40:33 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:01.789 16:40:33 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:01.789 16:40:33 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:01.789 16:40:33 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:01.789 16:40:33 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:02.044 16:40:33 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:02.044 16:40:33 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:02.044 16:40:33 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:02.044 16:40:33 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:02.044 16:40:33 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:02.302 [2024-07-13 16:40:33.688662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:02.302 [2024-07-13 16:40:33.688927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
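
Here the test repeats the exercise in degraded form: raid_bdev1 and its surviving base bdevs are torn down and only pt2 is restored, so the superblock on pt2 can re-register the array merely as "configuring" until a second member reappears. The traced RPCs, condensed into a sketch ($rpc as in the earlier sketch):

```bash
# Degraded re-assembly: full teardown, then restore a single member.
$rpc bdev_raid_delete raid_bdev1
$rpc bdev_passthru_delete pt2
$rpc bdev_passthru_delete pt3
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
# expected here: configuring (pt1 was dropped from the array earlier, so
# pt2 alone cannot bring the two-member array online)
```
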
00:23:02.302 [2024-07-13 16:40:33.689011] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:02.302 [2024-07-13 16:40:33.689115] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.302 [2024-07-13 16:40:33.692165] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.302 [2024-07-13 16:40:33.692364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:02.302 [2024-07-13 16:40:33.692583] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:02.302 [2024-07-13 16:40:33.692699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:02.302 pt2 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.302 16:40:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.561 16:40:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.561 "name": "raid_bdev1", 00:23:02.561 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:23:02.561 "strip_size_kb": 64, 00:23:02.561 "state": "configuring", 00:23:02.561 "raid_level": "raid5f", 00:23:02.561 "superblock": true, 00:23:02.561 "num_base_bdevs": 3, 00:23:02.561 "num_base_bdevs_discovered": 1, 00:23:02.561 "num_base_bdevs_operational": 2, 00:23:02.561 "base_bdevs_list": [ 00:23:02.561 { 00:23:02.561 "name": null, 00:23:02.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.561 "is_configured": false, 00:23:02.561 "data_offset": 2048, 00:23:02.561 "data_size": 63488 00:23:02.561 }, 00:23:02.561 { 00:23:02.561 "name": "pt2", 00:23:02.561 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 00:23:02.561 "is_configured": true, 00:23:02.561 "data_offset": 2048, 00:23:02.561 "data_size": 63488 00:23:02.561 }, 00:23:02.561 { 00:23:02.561 "name": null, 00:23:02.561 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:23:02.561 "is_configured": false, 00:23:02.561 "data_offset": 2048, 00:23:02.561 "data_size": 63488 00:23:02.561 } 00:23:02.561 ] 00:23:02.561 }' 00:23:02.561 16:40:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.561 16:40:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.128 16:40:34 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:03.128 16:40:34 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:03.128 16:40:34 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:03.128 16:40:34 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:03.387 [2024-07-13 16:40:34.664895] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:03.387 [2024-07-13 16:40:34.665257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.387 [2024-07-13 16:40:34.665352] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:03.387 [2024-07-13 16:40:34.665567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.387 [2024-07-13 16:40:34.666110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.387 [2024-07-13 16:40:34.666261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:03.387 [2024-07-13 16:40:34.666464] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:03.387 [2024-07-13 16:40:34.666588] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:03.387 [2024-07-13 16:40:34.666752] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:23:03.387 [2024-07-13 16:40:34.666845] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:03.387 [2024-07-13 16:40:34.666958] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:23:03.387 [2024-07-13 16:40:34.667685] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:03.387 [2024-07-13 16:40:34.667798] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:03.387 [2024-07-13 16:40:34.668133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.387 pt3 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.387 16:40:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.645 16:40:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.645 "name": "raid_bdev1", 00:23:03.645 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:23:03.645 "strip_size_kb": 64, 00:23:03.645 "state": "online", 00:23:03.645 "raid_level": "raid5f", 00:23:03.645 "superblock": true, 00:23:03.645 "num_base_bdevs": 3, 00:23:03.645 "num_base_bdevs_discovered": 2, 00:23:03.645 "num_base_bdevs_operational": 2, 00:23:03.645 "base_bdevs_list": [ 00:23:03.646 { 00:23:03.646 "name": null, 00:23:03.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.646 "is_configured": false, 00:23:03.646 "data_offset": 2048, 00:23:03.646 "data_size": 63488 00:23:03.646 }, 00:23:03.646 { 00:23:03.646 "name": "pt2", 00:23:03.646 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 
00:23:03.646 "is_configured": true, 00:23:03.646 "data_offset": 2048, 00:23:03.646 "data_size": 63488 00:23:03.646 }, 00:23:03.646 { 00:23:03.646 "name": "pt3", 00:23:03.646 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:23:03.646 "is_configured": true, 00:23:03.646 "data_offset": 2048, 00:23:03.646 "data_size": 63488 00:23:03.646 } 00:23:03.646 ] 00:23:03.646 }' 00:23:03.646 16:40:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.646 16:40:34 -- common/autotest_common.sh@10 -- # set +x 00:23:04.212 16:40:35 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:04.212 16:40:35 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:04.471 [2024-07-13 16:40:35.742609] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:04.471 [2024-07-13 16:40:35.742935] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:04.471 [2024-07-13 16:40:35.743151] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:04.471 [2024-07-13 16:40:35.743329] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:04.471 [2024-07-13 16:40:35.743419] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:04.471 16:40:35 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.471 16:40:35 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:04.730 16:40:35 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:04.730 16:40:35 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:04.730 16:40:35 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:04.730 [2024-07-13 16:40:36.158655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:04.730 [2024-07-13 16:40:36.158966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.730 [2024-07-13 16:40:36.159100] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:04.730 [2024-07-13 16:40:36.159206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.730 [2024-07-13 16:40:36.162268] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.730 [2024-07-13 16:40:36.162443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:04.730 [2024-07-13 16:40:36.162635] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:04.730 [2024-07-13 16:40:36.162744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:04.730 pt1 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:04.730 16:40:36 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.730 16:40:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.988 16:40:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:04.988 "name": "raid_bdev1", 00:23:04.988 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:23:04.988 "strip_size_kb": 64, 00:23:04.988 "state": "configuring", 00:23:04.988 "raid_level": "raid5f", 00:23:04.988 "superblock": true, 00:23:04.988 "num_base_bdevs": 3, 00:23:04.988 "num_base_bdevs_discovered": 1, 00:23:04.988 "num_base_bdevs_operational": 3, 00:23:04.988 "base_bdevs_list": [ 00:23:04.988 { 00:23:04.988 "name": "pt1", 00:23:04.988 "uuid": "4728a93b-35c2-5a2a-9d50-392b1e6ac5df", 00:23:04.988 "is_configured": true, 00:23:04.988 "data_offset": 2048, 00:23:04.988 "data_size": 63488 00:23:04.988 }, 00:23:04.988 { 00:23:04.988 "name": null, 00:23:04.988 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 00:23:04.988 "is_configured": false, 00:23:04.988 "data_offset": 2048, 00:23:04.988 "data_size": 63488 00:23:04.988 }, 00:23:04.988 { 00:23:04.988 "name": null, 00:23:04.988 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:23:04.988 "is_configured": false, 00:23:04.988 "data_offset": 2048, 00:23:04.988 "data_size": 63488 00:23:04.988 } 00:23:04.988 ] 00:23:04.988 }' 00:23:04.988 16:40:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:04.988 16:40:36 -- common/autotest_common.sh@10 -- # set +x 00:23:05.556 16:40:36 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:05.556 16:40:36 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:05.556 16:40:36 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:05.815 16:40:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:05.815 16:40:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:05.815 16:40:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:06.074 16:40:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:06.074 16:40:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:06.074 16:40:37 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:06.074 16:40:37 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:06.333 [2024-07-13 16:40:37.579025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:06.333 [2024-07-13 16:40:37.579403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.333 [2024-07-13 16:40:37.579475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:06.333 [2024-07-13 16:40:37.579582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.333 [2024-07-13 16:40:37.580153] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.333 [2024-07-13 16:40:37.580343] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:06.333 [2024-07-13 16:40:37.580572] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:23:06.333 [2024-07-13 16:40:37.580667] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:06.333 [2024-07-13 16:40:37.580761] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:06.333 [2024-07-13 16:40:37.580835] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:23:06.333 [2024-07-13 16:40:37.580991] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:06.333 pt3 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:06.333 "name": "raid_bdev1", 00:23:06.333 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:23:06.333 "strip_size_kb": 64, 00:23:06.333 "state": "configuring", 00:23:06.333 "raid_level": "raid5f", 00:23:06.333 "superblock": true, 00:23:06.333 "num_base_bdevs": 3, 00:23:06.333 "num_base_bdevs_discovered": 1, 00:23:06.333 "num_base_bdevs_operational": 2, 00:23:06.333 "base_bdevs_list": [ 00:23:06.333 { 00:23:06.333 "name": null, 00:23:06.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.333 "is_configured": false, 00:23:06.333 "data_offset": 2048, 00:23:06.333 "data_size": 63488 00:23:06.333 }, 00:23:06.333 { 00:23:06.333 "name": null, 00:23:06.333 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 00:23:06.333 "is_configured": false, 00:23:06.333 "data_offset": 2048, 00:23:06.333 "data_size": 63488 00:23:06.333 }, 00:23:06.333 { 00:23:06.333 "name": "pt3", 00:23:06.333 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:23:06.333 "is_configured": true, 00:23:06.333 "data_offset": 2048, 00:23:06.333 "data_size": 63488 00:23:06.333 } 00:23:06.333 ] 00:23:06.333 }' 00:23:06.333 16:40:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:06.333 16:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:06.901 16:40:38 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:06.901 16:40:38 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:06.901 16:40:38 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:07.158 [2024-07-13 16:40:38.527212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:07.159 [2024-07-13 16:40:38.527505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.159 [2024-07-13 
16:40:38.527583] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:07.159 [2024-07-13 16:40:38.527686] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.159 [2024-07-13 16:40:38.528241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.159 [2024-07-13 16:40:38.528417] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:07.159 [2024-07-13 16:40:38.528602] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:07.159 [2024-07-13 16:40:38.528703] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:07.159 [2024-07-13 16:40:38.528864] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:07.159 [2024-07-13 16:40:38.528953] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:07.159 [2024-07-13 16:40:38.529067] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:23:07.159 [2024-07-13 16:40:38.529853] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:07.159 [2024-07-13 16:40:38.529977] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:07.159 [2024-07-13 16:40:38.530224] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.159 pt2 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.159 16:40:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.416 16:40:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.416 "name": "raid_bdev1", 00:23:07.416 "uuid": "3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae", 00:23:07.416 "strip_size_kb": 64, 00:23:07.416 "state": "online", 00:23:07.416 "raid_level": "raid5f", 00:23:07.416 "superblock": true, 00:23:07.416 "num_base_bdevs": 3, 00:23:07.416 "num_base_bdevs_discovered": 2, 00:23:07.416 "num_base_bdevs_operational": 2, 00:23:07.416 "base_bdevs_list": [ 00:23:07.416 { 00:23:07.416 "name": null, 00:23:07.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.416 "is_configured": false, 00:23:07.416 "data_offset": 2048, 00:23:07.416 "data_size": 63488 00:23:07.416 }, 00:23:07.416 { 00:23:07.416 "name": "pt2", 00:23:07.416 "uuid": "9f04e192-46cb-5cf4-b6e1-de1bcebf7843", 00:23:07.416 "is_configured": true, 00:23:07.416 "data_offset": 2048, 
00:23:07.416 "data_size": 63488 00:23:07.416 }, 00:23:07.416 { 00:23:07.416 "name": "pt3", 00:23:07.416 "uuid": "754f1a9b-6777-5134-918a-d7f1d3f07ff7", 00:23:07.416 "is_configured": true, 00:23:07.416 "data_offset": 2048, 00:23:07.416 "data_size": 63488 00:23:07.416 } 00:23:07.416 ] 00:23:07.416 }' 00:23:07.416 16:40:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.416 16:40:38 -- common/autotest_common.sh@10 -- # set +x 00:23:07.984 16:40:39 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:07.984 16:40:39 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:08.242 [2024-07-13 16:40:39.552800] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:08.242 16:40:39 -- bdev/bdev_raid.sh@506 -- # '[' 3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae '!=' 3c8bbe69-ad6a-4c99-a40e-bdf6e80a15ae ']' 00:23:08.242 16:40:39 -- bdev/bdev_raid.sh@511 -- # killprocess 138235 00:23:08.242 16:40:39 -- common/autotest_common.sh@926 -- # '[' -z 138235 ']' 00:23:08.242 16:40:39 -- common/autotest_common.sh@930 -- # kill -0 138235 00:23:08.242 16:40:39 -- common/autotest_common.sh@931 -- # uname 00:23:08.242 16:40:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:08.242 16:40:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138235 00:23:08.242 killing process with pid 138235 00:23:08.242 16:40:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:08.242 16:40:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:08.242 16:40:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138235' 00:23:08.242 16:40:39 -- common/autotest_common.sh@945 -- # kill 138235 00:23:08.242 16:40:39 -- common/autotest_common.sh@950 -- # wait 138235 00:23:08.242 [2024-07-13 16:40:39.603912] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:08.242 [2024-07-13 16:40:39.604006] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:08.242 [2024-07-13 16:40:39.604075] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:08.242 [2024-07-13 16:40:39.604083] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:08.242 [2024-07-13 16:40:39.667315] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:08.810 00:23:08.810 real 0m17.370s 00:23:08.810 user 0m31.187s 00:23:08.810 ************************************ 00:23:08.810 END TEST raid5f_superblock_test 00:23:08.810 ************************************ 00:23:08.810 sys 0m3.201s 00:23:08.810 16:40:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.810 16:40:40 -- common/autotest_common.sh@10 -- # set +x 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:23:08.810 16:40:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:08.810 16:40:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:08.810 16:40:40 -- common/autotest_common.sh@10 -- # set +x 00:23:08.810 ************************************ 00:23:08.810 START TEST raid5f_rebuild_test 00:23:08.810 ************************************ 00:23:08.810 16:40:40 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 
false false 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@544 -- # raid_pid=138816 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138816 /var/tmp/spdk-raid.sock 00:23:08.810 16:40:40 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:08.810 16:40:40 -- common/autotest_common.sh@819 -- # '[' -z 138816 ']' 00:23:08.810 16:40:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:08.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:08.810 16:40:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:08.810 16:40:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:08.810 16:40:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:08.810 16:40:40 -- common/autotest_common.sh@10 -- # set +x 00:23:08.810 [2024-07-13 16:40:40.223003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:23:08.810 [2024-07-13 16:40:40.224212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138816 ] 00:23:08.810 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:08.810 Zero copy mechanism will not be used. 
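The rebuild test that starts here drives everything over the bdevperf RPC socket named in the command line above (-r /var/tmp/spdk-raid.sock), with bdevperf issuing 50/50 random read/write I/O in 3 MiB units (-w randrw -M 50 -o 3M, hence the 3145728-byte zero-copy note) against raid_bdev1 while the script reshapes the array underneath it. The array itself is assembled from the RPC calls traced below; as a minimal hand-reproduction sketch using only calls that appear in this log (the RPC variable is just shorthand for the rpc.py invocation the log uses, and the sizes are the logged ones — 32 MiB malloc bdevs at a 512-byte block size give the 65536-block data_size reported in the JSON dumps):

  # shorthand for the rpc.py invocation used throughout this log
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    # 32 MiB at a 512-byte block size = 65536 blocks per base bdev
    $RPC bdev_malloc_create 32 512 -b "$b"
  done
  # 64 KiB strip, raid5f, three members, no superblock (this test runs with superblock=false)
  $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1

The spare used later in the test is a malloc bdev behind a delay bdev (bdev_delay_create ... -w 100000 -n 100000, i.e. added write-side latency so the rebuild remains observable) wrapped in a passthru named "spare", as the vbdev_passthru notices below show.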
00:23:09.069 [2024-07-13 16:40:40.379799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.069 [2024-07-13 16:40:40.461280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.327 [2024-07-13 16:40:40.540708] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:09.894 16:40:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:09.894 16:40:41 -- common/autotest_common.sh@852 -- # return 0 00:23:09.894 16:40:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:09.894 16:40:41 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:09.894 16:40:41 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:09.894 BaseBdev1 00:23:09.894 16:40:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:09.894 16:40:41 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:09.894 16:40:41 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:10.153 BaseBdev2 00:23:10.153 16:40:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:10.153 16:40:41 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:10.153 16:40:41 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:10.423 BaseBdev3 00:23:10.423 16:40:41 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:10.681 spare_malloc 00:23:10.681 16:40:41 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:10.681 spare_delay 00:23:10.681 16:40:42 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:10.939 [2024-07-13 16:40:42.376053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:10.939 [2024-07-13 16:40:42.376473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.939 [2024-07-13 16:40:42.376656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:10.939 [2024-07-13 16:40:42.376786] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.939 [2024-07-13 16:40:42.379830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.939 [2024-07-13 16:40:42.380018] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:10.939 spare 00:23:10.939 16:40:42 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:11.196 [2024-07-13 16:40:42.564494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:11.196 [2024-07-13 16:40:42.567188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:11.196 [2024-07-13 16:40:42.567374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:11.196 [2024-07-13 16:40:42.567514] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:11.197 
[2024-07-13 16:40:42.567557] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:11.197 [2024-07-13 16:40:42.567858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:23:11.197 [2024-07-13 16:40:42.568735] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:11.197 [2024-07-13 16:40:42.568851] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:23:11.197 [2024-07-13 16:40:42.569200] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.197 16:40:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.455 16:40:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:11.455 "name": "raid_bdev1", 00:23:11.455 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:11.455 "strip_size_kb": 64, 00:23:11.455 "state": "online", 00:23:11.455 "raid_level": "raid5f", 00:23:11.455 "superblock": false, 00:23:11.455 "num_base_bdevs": 3, 00:23:11.455 "num_base_bdevs_discovered": 3, 00:23:11.455 "num_base_bdevs_operational": 3, 00:23:11.455 "base_bdevs_list": [ 00:23:11.455 { 00:23:11.455 "name": "BaseBdev1", 00:23:11.455 "uuid": "b860f923-8c60-4129-bf9a-f9948361addb", 00:23:11.455 "is_configured": true, 00:23:11.455 "data_offset": 0, 00:23:11.455 "data_size": 65536 00:23:11.455 }, 00:23:11.455 { 00:23:11.455 "name": "BaseBdev2", 00:23:11.455 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:11.455 "is_configured": true, 00:23:11.455 "data_offset": 0, 00:23:11.455 "data_size": 65536 00:23:11.455 }, 00:23:11.455 { 00:23:11.455 "name": "BaseBdev3", 00:23:11.455 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:11.455 "is_configured": true, 00:23:11.455 "data_offset": 0, 00:23:11.455 "data_size": 65536 00:23:11.455 } 00:23:11.455 ] 00:23:11.455 }' 00:23:11.455 16:40:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.455 16:40:42 -- common/autotest_common.sh@10 -- # set +x 00:23:12.020 16:40:43 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:12.020 16:40:43 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:12.278 [2024-07-13 16:40:43.573571] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:12.278 16:40:43 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:23:12.278 16:40:43 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:12.278 16:40:43 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:12.537 16:40:43 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:12.537 16:40:43 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:12.537 16:40:43 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:12.537 16:40:43 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:12.537 16:40:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:12.537 16:40:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:12.537 16:40:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:12.537 16:40:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:12.537 16:40:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:12.537 16:40:43 -- bdev/nbd_common.sh@12 -- # local i 00:23:12.537 16:40:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:12.537 16:40:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:12.537 16:40:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:12.795 [2024-07-13 16:40:44.021501] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:23:12.795 /dev/nbd0 00:23:12.795 16:40:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:12.795 16:40:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:12.795 16:40:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:12.795 16:40:44 -- common/autotest_common.sh@857 -- # local i 00:23:12.795 16:40:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:12.795 16:40:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:12.795 16:40:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:12.795 16:40:44 -- common/autotest_common.sh@861 -- # break 00:23:12.795 16:40:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:12.795 16:40:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:12.795 16:40:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:12.795 1+0 records in 00:23:12.795 1+0 records out 00:23:12.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474418 s, 8.6 MB/s 00:23:12.795 16:40:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:12.795 16:40:44 -- common/autotest_common.sh@874 -- # size=4096 00:23:12.795 16:40:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:12.795 16:40:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:12.795 16:40:44 -- common/autotest_common.sh@877 -- # return 0 00:23:12.795 16:40:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:12.795 16:40:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:12.795 16:40:44 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:12.795 16:40:44 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:12.795 16:40:44 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:12.795 16:40:44 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:23:13.053 512+0 records in 00:23:13.054 512+0 records out 00:23:13.054 67108864 bytes (67 MB, 64 MiB) copied, 0.332031 s, 202 MB/s 00:23:13.054 16:40:44 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:13.054 16:40:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
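The dd just above wrote the array in full stripes: with three base bdevs and a 64 KiB strip, one raid5f stripe carries (3 - 1) x 64 KiB = 128 KiB of data, which is exactly the bs=131072 the script derived (write_unit_size=256 blocks of 512 B), and 512 such writes cover the whole 131072-block (64 MiB) raid bdev reported by jq -r '.[].num_blocks' above. A sketch of that arithmetic, with every value as logged:

  strip_size_kb=64; num_base_bdevs=3; blocklen=512
  full_stripe_bytes=$(( strip_size_kb * 1024 * (num_base_bdevs - 1) ))   # 131072 bytes = 128 KiB
  write_unit_blocks=$(( full_stripe_bytes / blocklen ))                  # 256 blocks, as the script echoes
  # 512 full-stripe writes = 67108864 bytes, exactly the raid bdev's 131072-block capacity
  dd if=/dev/urandom of=/dev/nbd0 bs=$full_stripe_bytes count=512 oflag=direct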
00:23:13.054 16:40:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:13.054 16:40:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:13.054 16:40:44 -- bdev/nbd_common.sh@51 -- # local i 00:23:13.054 16:40:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.054 16:40:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:13.312 [2024-07-13 16:40:44.699754] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.312 16:40:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:13.312 16:40:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:13.312 16:40:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:13.312 16:40:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.312 16:40:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.312 16:40:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:13.312 16:40:44 -- bdev/nbd_common.sh@41 -- # break 00:23:13.312 16:40:44 -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.312 16:40:44 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:13.570 [2024-07-13 16:40:44.955283] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.570 16:40:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.829 16:40:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.829 "name": "raid_bdev1", 00:23:13.829 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:13.829 "strip_size_kb": 64, 00:23:13.829 "state": "online", 00:23:13.829 "raid_level": "raid5f", 00:23:13.829 "superblock": false, 00:23:13.829 "num_base_bdevs": 3, 00:23:13.829 "num_base_bdevs_discovered": 2, 00:23:13.829 "num_base_bdevs_operational": 2, 00:23:13.829 "base_bdevs_list": [ 00:23:13.829 { 00:23:13.829 "name": null, 00:23:13.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.829 "is_configured": false, 00:23:13.829 "data_offset": 0, 00:23:13.829 "data_size": 65536 00:23:13.829 }, 00:23:13.829 { 00:23:13.829 "name": "BaseBdev2", 00:23:13.829 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:13.829 "is_configured": true, 00:23:13.829 "data_offset": 0, 00:23:13.829 "data_size": 65536 00:23:13.829 }, 00:23:13.829 { 00:23:13.829 "name": "BaseBdev3", 00:23:13.829 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:13.829 "is_configured": true, 00:23:13.829 "data_offset": 0, 00:23:13.829 "data_size": 65536 00:23:13.829 } 00:23:13.829 ] 00:23:13.829 }' 
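Note the state after bdev_raid_remove_base_bdev above: raid5f tolerates a single missing member, so the array stays "online" with num_base_bdevs_discovered dropped to 2 and a null placeholder left in base_bdevs_list. The verify_raid_bdev_state helper pulls exactly that JSON via the jq select shown in the trace; a sketch of the kind of assertions it then makes against it (field names as in the dump, RPC again being shorthand for the logged rpc.py call):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # degraded but still serving I/O: online with one member gone
  [[ $(jq -r '.state' <<< "$info") == online ]]
  (( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 2 ))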
00:23:13.829 16:40:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.829 16:40:45 -- common/autotest_common.sh@10 -- # set +x 00:23:14.395 16:40:45 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:14.654 [2024-07-13 16:40:46.003453] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:14.654 [2024-07-13 16:40:46.003795] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:14.654 [2024-07-13 16:40:46.010973] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027990 00:23:14.654 [2024-07-13 16:40:46.014293] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:14.654 16:40:46 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:15.661 16:40:47 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:15.661 16:40:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:15.661 16:40:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:15.661 16:40:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:15.661 16:40:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:15.661 16:40:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.661 16:40:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.919 16:40:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:15.919 "name": "raid_bdev1", 00:23:15.919 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:15.919 "strip_size_kb": 64, 00:23:15.919 "state": "online", 00:23:15.919 "raid_level": "raid5f", 00:23:15.919 "superblock": false, 00:23:15.919 "num_base_bdevs": 3, 00:23:15.919 "num_base_bdevs_discovered": 3, 00:23:15.919 "num_base_bdevs_operational": 3, 00:23:15.919 "process": { 00:23:15.919 "type": "rebuild", 00:23:15.919 "target": "spare", 00:23:15.919 "progress": { 00:23:15.919 "blocks": 24576, 00:23:15.919 "percent": 18 00:23:15.919 } 00:23:15.919 }, 00:23:15.919 "base_bdevs_list": [ 00:23:15.919 { 00:23:15.919 "name": "spare", 00:23:15.919 "uuid": "14530a32-b2eb-52cc-a7c4-8e1c80405678", 00:23:15.919 "is_configured": true, 00:23:15.919 "data_offset": 0, 00:23:15.919 "data_size": 65536 00:23:15.919 }, 00:23:15.919 { 00:23:15.919 "name": "BaseBdev2", 00:23:15.919 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:15.919 "is_configured": true, 00:23:15.919 "data_offset": 0, 00:23:15.919 "data_size": 65536 00:23:15.919 }, 00:23:15.919 { 00:23:15.919 "name": "BaseBdev3", 00:23:15.919 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:15.919 "is_configured": true, 00:23:15.919 "data_offset": 0, 00:23:15.919 "data_size": 65536 00:23:15.919 } 00:23:15.919 ] 00:23:15.919 }' 00:23:15.919 16:40:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:15.919 16:40:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:15.919 16:40:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:15.919 16:40:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:15.919 16:40:47 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:16.178 [2024-07-13 16:40:47.604369] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:16.178 [2024-07-13 16:40:47.630500] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:16.178 [2024-07-13 16:40:47.630872] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.436 "name": "raid_bdev1", 00:23:16.436 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:16.436 "strip_size_kb": 64, 00:23:16.436 "state": "online", 00:23:16.436 "raid_level": "raid5f", 00:23:16.436 "superblock": false, 00:23:16.436 "num_base_bdevs": 3, 00:23:16.436 "num_base_bdevs_discovered": 2, 00:23:16.436 "num_base_bdevs_operational": 2, 00:23:16.436 "base_bdevs_list": [ 00:23:16.436 { 00:23:16.436 "name": null, 00:23:16.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.436 "is_configured": false, 00:23:16.436 "data_offset": 0, 00:23:16.436 "data_size": 65536 00:23:16.436 }, 00:23:16.436 { 00:23:16.436 "name": "BaseBdev2", 00:23:16.436 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:16.436 "is_configured": true, 00:23:16.436 "data_offset": 0, 00:23:16.436 "data_size": 65536 00:23:16.436 }, 00:23:16.436 { 00:23:16.436 "name": "BaseBdev3", 00:23:16.436 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:16.436 "is_configured": true, 00:23:16.436 "data_offset": 0, 00:23:16.436 "data_size": 65536 00:23:16.436 } 00:23:16.436 ] 00:23:16.436 }' 00:23:16.436 16:40:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.436 16:40:47 -- common/autotest_common.sh@10 -- # set +x 00:23:17.003 16:40:48 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:17.003 16:40:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:17.003 16:40:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:17.003 16:40:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:17.004 16:40:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:17.004 16:40:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.004 16:40:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.572 16:40:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:17.572 "name": "raid_bdev1", 00:23:17.572 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:17.572 "strip_size_kb": 64, 00:23:17.572 "state": "online", 00:23:17.572 "raid_level": "raid5f", 00:23:17.572 "superblock": false, 00:23:17.572 "num_base_bdevs": 3, 00:23:17.572 
"num_base_bdevs_discovered": 2, 00:23:17.572 "num_base_bdevs_operational": 2, 00:23:17.572 "base_bdevs_list": [ 00:23:17.572 { 00:23:17.572 "name": null, 00:23:17.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.572 "is_configured": false, 00:23:17.572 "data_offset": 0, 00:23:17.572 "data_size": 65536 00:23:17.572 }, 00:23:17.572 { 00:23:17.572 "name": "BaseBdev2", 00:23:17.572 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:17.572 "is_configured": true, 00:23:17.572 "data_offset": 0, 00:23:17.572 "data_size": 65536 00:23:17.572 }, 00:23:17.572 { 00:23:17.572 "name": "BaseBdev3", 00:23:17.572 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:17.572 "is_configured": true, 00:23:17.572 "data_offset": 0, 00:23:17.572 "data_size": 65536 00:23:17.572 } 00:23:17.572 ] 00:23:17.572 }' 00:23:17.572 16:40:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:17.572 16:40:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:17.572 16:40:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:17.572 16:40:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:17.572 16:40:48 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:17.572 [2024-07-13 16:40:49.041598] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:17.572 [2024-07-13 16:40:49.041937] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:17.832 [2024-07-13 16:40:49.049003] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027b30 00:23:17.832 [2024-07-13 16:40:49.051933] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:17.832 16:40:49 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:18.769 16:40:50 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.770 16:40:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.770 16:40:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:18.770 16:40:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:18.770 16:40:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.770 16:40:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.770 16:40:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:19.029 "name": "raid_bdev1", 00:23:19.029 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:19.029 "strip_size_kb": 64, 00:23:19.029 "state": "online", 00:23:19.029 "raid_level": "raid5f", 00:23:19.029 "superblock": false, 00:23:19.029 "num_base_bdevs": 3, 00:23:19.029 "num_base_bdevs_discovered": 3, 00:23:19.029 "num_base_bdevs_operational": 3, 00:23:19.029 "process": { 00:23:19.029 "type": "rebuild", 00:23:19.029 "target": "spare", 00:23:19.029 "progress": { 00:23:19.029 "blocks": 24576, 00:23:19.029 "percent": 18 00:23:19.029 } 00:23:19.029 }, 00:23:19.029 "base_bdevs_list": [ 00:23:19.029 { 00:23:19.029 "name": "spare", 00:23:19.029 "uuid": "14530a32-b2eb-52cc-a7c4-8e1c80405678", 00:23:19.029 "is_configured": true, 00:23:19.029 "data_offset": 0, 00:23:19.029 "data_size": 65536 00:23:19.029 }, 00:23:19.029 { 00:23:19.029 "name": "BaseBdev2", 00:23:19.029 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:19.029 "is_configured": true, 
00:23:19.029 "data_offset": 0, 00:23:19.029 "data_size": 65536 00:23:19.029 }, 00:23:19.029 { 00:23:19.029 "name": "BaseBdev3", 00:23:19.029 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:19.029 "is_configured": true, 00:23:19.029 "data_offset": 0, 00:23:19.029 "data_size": 65536 00:23:19.029 } 00:23:19.029 ] 00:23:19.029 }' 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@657 -- # local timeout=586 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.029 16:40:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.289 16:40:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:19.289 "name": "raid_bdev1", 00:23:19.289 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:19.289 "strip_size_kb": 64, 00:23:19.289 "state": "online", 00:23:19.289 "raid_level": "raid5f", 00:23:19.289 "superblock": false, 00:23:19.289 "num_base_bdevs": 3, 00:23:19.289 "num_base_bdevs_discovered": 3, 00:23:19.289 "num_base_bdevs_operational": 3, 00:23:19.289 "process": { 00:23:19.289 "type": "rebuild", 00:23:19.289 "target": "spare", 00:23:19.289 "progress": { 00:23:19.289 "blocks": 30720, 00:23:19.289 "percent": 23 00:23:19.289 } 00:23:19.289 }, 00:23:19.289 "base_bdevs_list": [ 00:23:19.289 { 00:23:19.289 "name": "spare", 00:23:19.289 "uuid": "14530a32-b2eb-52cc-a7c4-8e1c80405678", 00:23:19.289 "is_configured": true, 00:23:19.289 "data_offset": 0, 00:23:19.289 "data_size": 65536 00:23:19.289 }, 00:23:19.289 { 00:23:19.289 "name": "BaseBdev2", 00:23:19.289 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:19.289 "is_configured": true, 00:23:19.289 "data_offset": 0, 00:23:19.289 "data_size": 65536 00:23:19.289 }, 00:23:19.289 { 00:23:19.289 "name": "BaseBdev3", 00:23:19.289 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:19.289 "is_configured": true, 00:23:19.289 "data_offset": 0, 00:23:19.289 "data_size": 65536 00:23:19.289 } 00:23:19.289 ] 00:23:19.289 }' 00:23:19.289 16:40:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:19.289 16:40:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:19.289 16:40:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:19.289 16:40:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.289 16:40:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:20.668 16:40:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:20.668 
16:40:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:20.668 16:40:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:20.668 16:40:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:20.668 16:40:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:20.668 16:40:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:20.669 16:40:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.669 16:40:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.669 16:40:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:20.669 "name": "raid_bdev1", 00:23:20.669 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:20.669 "strip_size_kb": 64, 00:23:20.669 "state": "online", 00:23:20.669 "raid_level": "raid5f", 00:23:20.669 "superblock": false, 00:23:20.669 "num_base_bdevs": 3, 00:23:20.669 "num_base_bdevs_discovered": 3, 00:23:20.669 "num_base_bdevs_operational": 3, 00:23:20.669 "process": { 00:23:20.669 "type": "rebuild", 00:23:20.669 "target": "spare", 00:23:20.669 "progress": { 00:23:20.669 "blocks": 57344, 00:23:20.669 "percent": 43 00:23:20.669 } 00:23:20.669 }, 00:23:20.669 "base_bdevs_list": [ 00:23:20.669 { 00:23:20.669 "name": "spare", 00:23:20.669 "uuid": "14530a32-b2eb-52cc-a7c4-8e1c80405678", 00:23:20.669 "is_configured": true, 00:23:20.669 "data_offset": 0, 00:23:20.669 "data_size": 65536 00:23:20.669 }, 00:23:20.669 { 00:23:20.669 "name": "BaseBdev2", 00:23:20.669 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:20.669 "is_configured": true, 00:23:20.669 "data_offset": 0, 00:23:20.669 "data_size": 65536 00:23:20.669 }, 00:23:20.669 { 00:23:20.669 "name": "BaseBdev3", 00:23:20.669 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:20.669 "is_configured": true, 00:23:20.669 "data_offset": 0, 00:23:20.669 "data_size": 65536 00:23:20.669 } 00:23:20.669 ] 00:23:20.669 }' 00:23:20.669 16:40:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:20.669 16:40:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:20.669 16:40:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:20.669 16:40:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:20.669 16:40:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:21.605 16:40:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:21.605 16:40:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.605 16:40:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:21.605 16:40:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:21.605 16:40:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:21.605 16:40:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:21.861 16:40:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.861 16:40:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.118 16:40:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:22.118 "name": "raid_bdev1", 00:23:22.118 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:22.118 "strip_size_kb": 64, 00:23:22.118 "state": "online", 00:23:22.118 "raid_level": "raid5f", 00:23:22.118 "superblock": false, 00:23:22.118 "num_base_bdevs": 3, 00:23:22.118 "num_base_bdevs_discovered": 3, 00:23:22.118 "num_base_bdevs_operational": 3, 
00:23:22.118 "process": { 00:23:22.118 "type": "rebuild", 00:23:22.118 "target": "spare", 00:23:22.118 "progress": { 00:23:22.118 "blocks": 86016, 00:23:22.119 "percent": 65 00:23:22.119 } 00:23:22.119 }, 00:23:22.119 "base_bdevs_list": [ 00:23:22.119 { 00:23:22.119 "name": "spare", 00:23:22.119 "uuid": "14530a32-b2eb-52cc-a7c4-8e1c80405678", 00:23:22.119 "is_configured": true, 00:23:22.119 "data_offset": 0, 00:23:22.119 "data_size": 65536 00:23:22.119 }, 00:23:22.119 { 00:23:22.119 "name": "BaseBdev2", 00:23:22.119 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:22.119 "is_configured": true, 00:23:22.119 "data_offset": 0, 00:23:22.119 "data_size": 65536 00:23:22.119 }, 00:23:22.119 { 00:23:22.119 "name": "BaseBdev3", 00:23:22.119 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:22.119 "is_configured": true, 00:23:22.119 "data_offset": 0, 00:23:22.119 "data_size": 65536 00:23:22.119 } 00:23:22.119 ] 00:23:22.119 }' 00:23:22.119 16:40:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:22.119 16:40:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.119 16:40:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:22.119 16:40:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.119 16:40:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:23.055 16:40:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:23.055 16:40:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:23.055 16:40:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:23.055 16:40:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:23.055 16:40:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:23.055 16:40:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:23.055 16:40:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.055 16:40:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.315 16:40:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:23.315 "name": "raid_bdev1", 00:23:23.315 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:23.315 "strip_size_kb": 64, 00:23:23.315 "state": "online", 00:23:23.315 "raid_level": "raid5f", 00:23:23.315 "superblock": false, 00:23:23.315 "num_base_bdevs": 3, 00:23:23.315 "num_base_bdevs_discovered": 3, 00:23:23.315 "num_base_bdevs_operational": 3, 00:23:23.315 "process": { 00:23:23.315 "type": "rebuild", 00:23:23.315 "target": "spare", 00:23:23.315 "progress": { 00:23:23.315 "blocks": 112640, 00:23:23.315 "percent": 85 00:23:23.315 } 00:23:23.315 }, 00:23:23.315 "base_bdevs_list": [ 00:23:23.315 { 00:23:23.315 "name": "spare", 00:23:23.315 "uuid": "14530a32-b2eb-52cc-a7c4-8e1c80405678", 00:23:23.315 "is_configured": true, 00:23:23.315 "data_offset": 0, 00:23:23.315 "data_size": 65536 00:23:23.315 }, 00:23:23.315 { 00:23:23.315 "name": "BaseBdev2", 00:23:23.315 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:23.315 "is_configured": true, 00:23:23.315 "data_offset": 0, 00:23:23.315 "data_size": 65536 00:23:23.315 }, 00:23:23.315 { 00:23:23.315 "name": "BaseBdev3", 00:23:23.315 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:23.315 "is_configured": true, 00:23:23.315 "data_offset": 0, 00:23:23.315 "data_size": 65536 00:23:23.315 } 00:23:23.315 ] 00:23:23.315 }' 00:23:23.315 16:40:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:23.315 16:40:54 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.315 16:40:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:23.575 16:40:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.575 16:40:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:24.144 [2024-07-13 16:40:55.516882] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:24.144 [2024-07-13 16:40:55.517244] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:24.144 [2024-07-13 16:40:55.517510] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.403 16:40:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:24.403 16:40:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:24.403 16:40:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:24.403 16:40:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:24.403 16:40:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:24.403 16:40:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:24.403 16:40:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.403 16:40:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.662 16:40:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:24.662 "name": "raid_bdev1", 00:23:24.662 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:24.662 "strip_size_kb": 64, 00:23:24.662 "state": "online", 00:23:24.662 "raid_level": "raid5f", 00:23:24.662 "superblock": false, 00:23:24.662 "num_base_bdevs": 3, 00:23:24.662 "num_base_bdevs_discovered": 3, 00:23:24.662 "num_base_bdevs_operational": 3, 00:23:24.662 "base_bdevs_list": [ 00:23:24.662 { 00:23:24.662 "name": "spare", 00:23:24.662 "uuid": "14530a32-b2eb-52cc-a7c4-8e1c80405678", 00:23:24.662 "is_configured": true, 00:23:24.662 "data_offset": 0, 00:23:24.662 "data_size": 65536 00:23:24.662 }, 00:23:24.662 { 00:23:24.662 "name": "BaseBdev2", 00:23:24.662 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:24.662 "is_configured": true, 00:23:24.662 "data_offset": 0, 00:23:24.662 "data_size": 65536 00:23:24.662 }, 00:23:24.662 { 00:23:24.662 "name": "BaseBdev3", 00:23:24.662 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:24.662 "is_configured": true, 00:23:24.662 "data_offset": 0, 00:23:24.662 "data_size": 65536 00:23:24.662 } 00:23:24.662 ] 00:23:24.662 }' 00:23:24.662 16:40:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:24.662 16:40:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:24.662 16:40:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:24.922 16:40:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:24.922 16:40:56 -- bdev/bdev_raid.sh@660 -- # break 00:23:24.922 16:40:56 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:24.922 16:40:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:24.922 16:40:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:24.922 16:40:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:24.922 16:40:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:24.922 16:40:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.922 16:40:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:25.181 "name": "raid_bdev1", 00:23:25.181 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:25.181 "strip_size_kb": 64, 00:23:25.181 "state": "online", 00:23:25.181 "raid_level": "raid5f", 00:23:25.181 "superblock": false, 00:23:25.181 "num_base_bdevs": 3, 00:23:25.181 "num_base_bdevs_discovered": 3, 00:23:25.181 "num_base_bdevs_operational": 3, 00:23:25.181 "base_bdevs_list": [ 00:23:25.181 { 00:23:25.181 "name": "spare", 00:23:25.181 "uuid": "14530a32-b2eb-52cc-a7c4-8e1c80405678", 00:23:25.181 "is_configured": true, 00:23:25.181 "data_offset": 0, 00:23:25.181 "data_size": 65536 00:23:25.181 }, 00:23:25.181 { 00:23:25.181 "name": "BaseBdev2", 00:23:25.181 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:25.181 "is_configured": true, 00:23:25.181 "data_offset": 0, 00:23:25.181 "data_size": 65536 00:23:25.181 }, 00:23:25.181 { 00:23:25.181 "name": "BaseBdev3", 00:23:25.181 "uuid": "ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:25.181 "is_configured": true, 00:23:25.181 "data_offset": 0, 00:23:25.181 "data_size": 65536 00:23:25.181 } 00:23:25.181 ] 00:23:25.181 }' 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.181 16:40:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.441 16:40:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:25.441 "name": "raid_bdev1", 00:23:25.441 "uuid": "8d1a29a5-d29a-45a8-bd6b-009773fe3586", 00:23:25.441 "strip_size_kb": 64, 00:23:25.441 "state": "online", 00:23:25.441 "raid_level": "raid5f", 00:23:25.441 "superblock": false, 00:23:25.441 "num_base_bdevs": 3, 00:23:25.441 "num_base_bdevs_discovered": 3, 00:23:25.441 "num_base_bdevs_operational": 3, 00:23:25.441 "base_bdevs_list": [ 00:23:25.441 { 00:23:25.441 "name": "spare", 00:23:25.441 "uuid": "14530a32-b2eb-52cc-a7c4-8e1c80405678", 00:23:25.441 "is_configured": true, 00:23:25.441 "data_offset": 0, 00:23:25.441 "data_size": 65536 00:23:25.441 }, 00:23:25.441 { 00:23:25.441 "name": "BaseBdev2", 00:23:25.441 "uuid": "6d8c10f9-7c36-4814-a1fd-0f351c24e0f0", 00:23:25.441 "is_configured": true, 00:23:25.441 "data_offset": 0, 00:23:25.441 "data_size": 65536 00:23:25.441 }, 00:23:25.441 { 00:23:25.441 "name": "BaseBdev3", 00:23:25.441 "uuid": 
"ee87ffa7-67db-4ba7-b680-221872b34d27", 00:23:25.441 "is_configured": true, 00:23:25.441 "data_offset": 0, 00:23:25.441 "data_size": 65536 00:23:25.441 } 00:23:25.441 ] 00:23:25.441 }' 00:23:25.441 16:40:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:25.441 16:40:56 -- common/autotest_common.sh@10 -- # set +x 00:23:26.010 16:40:57 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:26.010 [2024-07-13 16:40:57.476947] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.010 [2024-07-13 16:40:57.477237] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:26.010 [2024-07-13 16:40:57.477567] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.010 [2024-07-13 16:40:57.477773] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:26.010 [2024-07-13 16:40:57.477861] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:23:26.269 16:40:57 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:26.269 16:40:57 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.528 16:40:57 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:26.528 16:40:57 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:26.528 16:40:57 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:26.528 16:40:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:26.528 16:40:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:26.528 16:40:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:26.528 16:40:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:26.528 16:40:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:26.528 16:40:57 -- bdev/nbd_common.sh@12 -- # local i 00:23:26.528 16:40:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:26.528 16:40:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:26.528 16:40:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:26.529 /dev/nbd0 00:23:26.529 16:40:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:26.529 16:40:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:26.529 16:40:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:26.529 16:40:57 -- common/autotest_common.sh@857 -- # local i 00:23:26.529 16:40:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:26.529 16:40:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:26.529 16:40:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:26.529 16:40:57 -- common/autotest_common.sh@861 -- # break 00:23:26.529 16:40:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:26.529 16:40:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:26.529 16:40:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:26.529 1+0 records in 00:23:26.529 1+0 records out 00:23:26.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410022 s, 10.0 MB/s 00:23:26.529 16:40:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:26.787 16:40:58 
-- common/autotest_common.sh@874 -- # size=4096 00:23:26.787 16:40:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:26.787 16:40:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:26.787 16:40:58 -- common/autotest_common.sh@877 -- # return 0 00:23:26.787 16:40:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:26.787 16:40:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:26.787 16:40:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:27.045 /dev/nbd1 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:27.045 16:40:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:27.045 16:40:58 -- common/autotest_common.sh@857 -- # local i 00:23:27.045 16:40:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:27.045 16:40:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:27.045 16:40:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:27.045 16:40:58 -- common/autotest_common.sh@861 -- # break 00:23:27.045 16:40:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:27.045 16:40:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:27.045 16:40:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:27.045 1+0 records in 00:23:27.045 1+0 records out 00:23:27.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718674 s, 5.7 MB/s 00:23:27.045 16:40:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.045 16:40:58 -- common/autotest_common.sh@874 -- # size=4096 00:23:27.045 16:40:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.045 16:40:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:27.045 16:40:58 -- common/autotest_common.sh@877 -- # return 0 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:27.045 16:40:58 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:27.045 16:40:58 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@51 -- # local i 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:27.045 16:40:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@41 -- # break 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@45 -- # return 0 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:23:27.309 16:40:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:27.585 16:40:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:27.585 16:40:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:27.585 16:40:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:27.585 16:40:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:27.585 16:40:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:27.585 16:40:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:27.585 16:40:58 -- bdev/nbd_common.sh@41 -- # break 00:23:27.585 16:40:58 -- bdev/nbd_common.sh@45 -- # return 0 00:23:27.585 16:40:58 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:27.585 16:40:58 -- bdev/bdev_raid.sh@709 -- # killprocess 138816 00:23:27.585 16:40:58 -- common/autotest_common.sh@926 -- # '[' -z 138816 ']' 00:23:27.585 16:40:58 -- common/autotest_common.sh@930 -- # kill -0 138816 00:23:27.585 16:40:58 -- common/autotest_common.sh@931 -- # uname 00:23:27.585 16:40:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:27.585 16:40:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138816 00:23:27.585 16:40:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:27.585 16:40:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:27.585 16:40:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138816' 00:23:27.585 killing process with pid 138816 00:23:27.585 16:40:58 -- common/autotest_common.sh@945 -- # kill 138816 00:23:27.585 Received shutdown signal, test time was about 60.000000 seconds 00:23:27.585 00:23:27.585 Latency(us) 00:23:27.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.585 =================================================================================================================== 00:23:27.585 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.585 16:40:58 -- common/autotest_common.sh@950 -- # wait 138816 00:23:27.585 [2024-07-13 16:40:58.981096] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:27.843 [2024-07-13 16:40:59.058051] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:28.101 ************************************ 00:23:28.102 END TEST raid5f_rebuild_test 00:23:28.102 ************************************ 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:28.102 00:23:28.102 real 0m19.322s 00:23:28.102 user 0m28.357s 00:23:28.102 sys 0m3.268s 00:23:28.102 16:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:28.102 16:40:59 -- common/autotest_common.sh@10 -- # set +x 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:23:28.102 16:40:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:28.102 16:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:28.102 16:40:59 -- common/autotest_common.sh@10 -- # set +x 00:23:28.102 ************************************ 00:23:28.102 START TEST raid5f_rebuild_test_sb 00:23:28.102 ************************************ 00:23:28.102 16:40:59 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@519 -- # 
local superblock=true 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@544 -- # raid_pid=139341 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:28.102 16:40:59 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139341 /var/tmp/spdk-raid.sock 00:23:28.102 16:40:59 -- common/autotest_common.sh@819 -- # '[' -z 139341 ']' 00:23:28.102 16:40:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:28.102 16:40:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:28.102 16:40:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:28.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:28.102 16:40:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:28.102 16:40:59 -- common/autotest_common.sh@10 -- # set +x 00:23:28.361 [2024-07-13 16:40:59.621034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:23:28.361 [2024-07-13 16:40:59.622014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139341 ] 00:23:28.361 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:28.361 Zero copy mechanism will not be used. 
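The bdevperf instance above is started with -z, so it idles until it is configured over /var/tmp/spdk-raid.sock; the trace that follows assembles the array by RPC: three malloc bdevs each wrapped in a passthru bdev, then a raid5f bdev with a superblock. A minimal sketch of that same sequence, assuming $SPDK_DIR points at a built SPDK tree and an app is already listening on the socket; the RPC commands themselves are taken verbatim from the trace below:

RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    $RPC bdev_malloc_create 32 512 -b "${b}_malloc"      # 32 MiB backing store, 512 B blocks
    $RPC bdev_passthru_create -b "${b}_malloc" -p "$b"   # passthru layer that the raid claims
done
# -z 64: 64 KiB strip size; -s: write a superblock (this is the _sb test variant)
$RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1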
00:23:28.361 [2024-07-13 16:40:59.787216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.619 [2024-07-13 16:40:59.868736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.619 [2024-07-13 16:40:59.948835] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:29.185 16:41:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:29.186 16:41:00 -- common/autotest_common.sh@852 -- # return 0 00:23:29.186 16:41:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:29.186 16:41:00 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:29.186 16:41:00 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:29.444 BaseBdev1_malloc 00:23:29.444 16:41:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:29.703 [2024-07-13 16:41:01.050279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:29.703 [2024-07-13 16:41:01.050677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.703 [2024-07-13 16:41:01.050751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:23:29.703 [2024-07-13 16:41:01.050903] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.703 [2024-07-13 16:41:01.054194] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.703 [2024-07-13 16:41:01.054401] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:29.703 BaseBdev1 00:23:29.703 16:41:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:29.703 16:41:01 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:29.703 16:41:01 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:29.963 BaseBdev2_malloc 00:23:29.963 16:41:01 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:30.222 [2024-07-13 16:41:01.454629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:30.222 [2024-07-13 16:41:01.455004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.222 [2024-07-13 16:41:01.455084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:30.222 [2024-07-13 16:41:01.455219] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.222 [2024-07-13 16:41:01.458090] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.222 [2024-07-13 16:41:01.458264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:30.222 BaseBdev2 00:23:30.222 16:41:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:30.222 16:41:01 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:30.222 16:41:01 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:30.222 BaseBdev3_malloc 00:23:30.482 16:41:01 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:30.482 [2024-07-13 16:41:01.876389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:30.482 [2024-07-13 16:41:01.876707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.482 [2024-07-13 16:41:01.876855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:30.482 [2024-07-13 16:41:01.876993] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.482 [2024-07-13 16:41:01.879865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.482 [2024-07-13 16:41:01.880040] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:30.482 BaseBdev3 00:23:30.482 16:41:01 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:30.742 spare_malloc 00:23:30.742 16:41:02 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:31.001 spare_delay 00:23:31.001 16:41:02 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:31.261 [2024-07-13 16:41:02.536921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:31.261 [2024-07-13 16:41:02.537338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.261 [2024-07-13 16:41:02.537422] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:31.261 [2024-07-13 16:41:02.537577] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.261 [2024-07-13 16:41:02.540593] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.261 [2024-07-13 16:41:02.540790] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:31.261 spare 00:23:31.261 16:41:02 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:31.520 [2024-07-13 16:41:02.733302] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:31.520 [2024-07-13 16:41:02.736125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:31.520 [2024-07-13 16:41:02.736383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:31.520 [2024-07-13 16:41:02.736655] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:31.520 [2024-07-13 16:41:02.736700] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:31.520 [2024-07-13 16:41:02.737043] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:23:31.520 [2024-07-13 16:41:02.738011] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:31.520 [2024-07-13 16:41:02.738132] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:31.520 [2024-07-13 16:41:02.738424] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@564 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.520 16:41:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.779 16:41:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.779 "name": "raid_bdev1", 00:23:31.779 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:31.779 "strip_size_kb": 64, 00:23:31.779 "state": "online", 00:23:31.779 "raid_level": "raid5f", 00:23:31.779 "superblock": true, 00:23:31.779 "num_base_bdevs": 3, 00:23:31.779 "num_base_bdevs_discovered": 3, 00:23:31.779 "num_base_bdevs_operational": 3, 00:23:31.779 "base_bdevs_list": [ 00:23:31.779 { 00:23:31.779 "name": "BaseBdev1", 00:23:31.779 "uuid": "b109593b-b5ee-55de-8a8c-bda8583c0e97", 00:23:31.779 "is_configured": true, 00:23:31.779 "data_offset": 2048, 00:23:31.779 "data_size": 63488 00:23:31.779 }, 00:23:31.779 { 00:23:31.779 "name": "BaseBdev2", 00:23:31.779 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:31.779 "is_configured": true, 00:23:31.779 "data_offset": 2048, 00:23:31.779 "data_size": 63488 00:23:31.779 }, 00:23:31.779 { 00:23:31.779 "name": "BaseBdev3", 00:23:31.779 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:31.779 "is_configured": true, 00:23:31.779 "data_offset": 2048, 00:23:31.779 "data_size": 63488 00:23:31.779 } 00:23:31.779 ] 00:23:31.779 }' 00:23:31.779 16:41:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.779 16:41:03 -- common/autotest_common.sh@10 -- # set +x 00:23:32.346 16:41:03 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:32.346 16:41:03 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:32.346 [2024-07-13 16:41:03.750784] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:32.346 16:41:03 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:23:32.346 16:41:03 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:32.346 16:41:03 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.605 16:41:03 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:32.605 16:41:03 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:32.605 16:41:03 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:32.605 16:41:03 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:32.605 16:41:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:32.605 16:41:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:32.605 16:41:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:32.605 16:41:03 -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:32.605 16:41:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:32.605 16:41:03 -- bdev/nbd_common.sh@12 -- # local i 00:23:32.605 16:41:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:32.605 16:41:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:32.605 16:41:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:32.865 [2024-07-13 16:41:04.194774] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:32.865 /dev/nbd0 00:23:32.865 16:41:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:32.865 16:41:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:32.865 16:41:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:32.865 16:41:04 -- common/autotest_common.sh@857 -- # local i 00:23:32.865 16:41:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:32.865 16:41:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:32.865 16:41:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:32.865 16:41:04 -- common/autotest_common.sh@861 -- # break 00:23:32.865 16:41:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:32.865 16:41:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:32.865 16:41:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:32.865 1+0 records in 00:23:32.865 1+0 records out 00:23:32.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678514 s, 6.0 MB/s 00:23:32.865 16:41:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:32.865 16:41:04 -- common/autotest_common.sh@874 -- # size=4096 00:23:32.865 16:41:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:32.865 16:41:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:32.865 16:41:04 -- common/autotest_common.sh@877 -- # return 0 00:23:32.865 16:41:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:32.865 16:41:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:32.865 16:41:04 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:32.865 16:41:04 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:32.865 16:41:04 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:32.865 16:41:04 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:23:33.124 496+0 records in 00:23:33.124 496+0 records out 00:23:33.383 65011712 bytes (65 MB, 62 MiB) copied, 0.319003 s, 204 MB/s 00:23:33.383 16:41:04 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@51 -- # local i 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:33.383 [2024-07-13 16:41:04.805585] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:23:33.383 16:41:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@41 -- # break 00:23:33.383 16:41:04 -- bdev/nbd_common.sh@45 -- # return 0 00:23:33.383 16:41:04 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:33.642 [2024-07-13 16:41:04.997229] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.642 16:41:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.900 16:41:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.900 "name": "raid_bdev1", 00:23:33.900 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:33.900 "strip_size_kb": 64, 00:23:33.900 "state": "online", 00:23:33.900 "raid_level": "raid5f", 00:23:33.900 "superblock": true, 00:23:33.900 "num_base_bdevs": 3, 00:23:33.900 "num_base_bdevs_discovered": 2, 00:23:33.900 "num_base_bdevs_operational": 2, 00:23:33.900 "base_bdevs_list": [ 00:23:33.900 { 00:23:33.900 "name": null, 00:23:33.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.900 "is_configured": false, 00:23:33.900 "data_offset": 2048, 00:23:33.900 "data_size": 63488 00:23:33.900 }, 00:23:33.900 { 00:23:33.900 "name": "BaseBdev2", 00:23:33.900 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:33.900 "is_configured": true, 00:23:33.900 "data_offset": 2048, 00:23:33.900 "data_size": 63488 00:23:33.900 }, 00:23:33.900 { 00:23:33.900 "name": "BaseBdev3", 00:23:33.900 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:33.901 "is_configured": true, 00:23:33.901 "data_offset": 2048, 00:23:33.901 "data_size": 63488 00:23:33.901 } 00:23:33.901 ] 00:23:33.901 }' 00:23:33.901 16:41:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.901 16:41:05 -- common/autotest_common.sh@10 -- # set +x 00:23:34.469 16:41:05 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:34.728 [2024-07-13 16:41:06.013445] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:34.728 [2024-07-13 16:41:06.013821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:34.728 [2024-07-13 16:41:06.020965] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000025500 
00:23:34.728 [2024-07-13 16:41:06.024274] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:34.728 16:41:06 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:35.666 16:41:07 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.666 16:41:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:35.666 16:41:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:35.666 16:41:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:35.666 16:41:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:35.666 16:41:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.666 16:41:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.925 16:41:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:35.925 "name": "raid_bdev1", 00:23:35.925 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:35.925 "strip_size_kb": 64, 00:23:35.925 "state": "online", 00:23:35.925 "raid_level": "raid5f", 00:23:35.925 "superblock": true, 00:23:35.925 "num_base_bdevs": 3, 00:23:35.925 "num_base_bdevs_discovered": 3, 00:23:35.925 "num_base_bdevs_operational": 3, 00:23:35.925 "process": { 00:23:35.925 "type": "rebuild", 00:23:35.925 "target": "spare", 00:23:35.925 "progress": { 00:23:35.925 "blocks": 24576, 00:23:35.925 "percent": 19 00:23:35.925 } 00:23:35.925 }, 00:23:35.925 "base_bdevs_list": [ 00:23:35.925 { 00:23:35.925 "name": "spare", 00:23:35.925 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:35.925 "is_configured": true, 00:23:35.925 "data_offset": 2048, 00:23:35.925 "data_size": 63488 00:23:35.925 }, 00:23:35.925 { 00:23:35.925 "name": "BaseBdev2", 00:23:35.925 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:35.925 "is_configured": true, 00:23:35.925 "data_offset": 2048, 00:23:35.925 "data_size": 63488 00:23:35.925 }, 00:23:35.925 { 00:23:35.925 "name": "BaseBdev3", 00:23:35.925 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:35.925 "is_configured": true, 00:23:35.925 "data_offset": 2048, 00:23:35.925 "data_size": 63488 00:23:35.925 } 00:23:35.925 ] 00:23:35.925 }' 00:23:35.925 16:41:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:35.925 16:41:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.925 16:41:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:36.185 16:41:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.185 16:41:07 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:36.185 [2024-07-13 16:41:07.573726] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:36.185 [2024-07-13 16:41:07.641701] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:36.185 [2024-07-13 16:41:07.642033] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:36.445 "name": "raid_bdev1", 00:23:36.445 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:36.445 "strip_size_kb": 64, 00:23:36.445 "state": "online", 00:23:36.445 "raid_level": "raid5f", 00:23:36.445 "superblock": true, 00:23:36.445 "num_base_bdevs": 3, 00:23:36.445 "num_base_bdevs_discovered": 2, 00:23:36.445 "num_base_bdevs_operational": 2, 00:23:36.445 "base_bdevs_list": [ 00:23:36.445 { 00:23:36.445 "name": null, 00:23:36.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.445 "is_configured": false, 00:23:36.445 "data_offset": 2048, 00:23:36.445 "data_size": 63488 00:23:36.445 }, 00:23:36.445 { 00:23:36.445 "name": "BaseBdev2", 00:23:36.445 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:36.445 "is_configured": true, 00:23:36.445 "data_offset": 2048, 00:23:36.445 "data_size": 63488 00:23:36.445 }, 00:23:36.445 { 00:23:36.445 "name": "BaseBdev3", 00:23:36.445 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:36.445 "is_configured": true, 00:23:36.445 "data_offset": 2048, 00:23:36.445 "data_size": 63488 00:23:36.445 } 00:23:36.445 ] 00:23:36.445 }' 00:23:36.445 16:41:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:36.445 16:41:07 -- common/autotest_common.sh@10 -- # set +x 00:23:37.014 16:41:08 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.014 16:41:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:37.014 16:41:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:37.014 16:41:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:37.014 16:41:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:37.273 16:41:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.273 16:41:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.274 16:41:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:37.274 "name": "raid_bdev1", 00:23:37.274 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:37.274 "strip_size_kb": 64, 00:23:37.274 "state": "online", 00:23:37.274 "raid_level": "raid5f", 00:23:37.274 "superblock": true, 00:23:37.274 "num_base_bdevs": 3, 00:23:37.274 "num_base_bdevs_discovered": 2, 00:23:37.274 "num_base_bdevs_operational": 2, 00:23:37.274 "base_bdevs_list": [ 00:23:37.274 { 00:23:37.274 "name": null, 00:23:37.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.274 "is_configured": false, 00:23:37.274 "data_offset": 2048, 00:23:37.274 "data_size": 63488 00:23:37.274 }, 00:23:37.274 { 00:23:37.274 "name": "BaseBdev2", 00:23:37.274 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:37.274 "is_configured": true, 00:23:37.274 "data_offset": 2048, 00:23:37.274 "data_size": 63488 00:23:37.274 }, 00:23:37.274 { 00:23:37.274 "name": "BaseBdev3", 00:23:37.274 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:37.274 
"is_configured": true, 00:23:37.274 "data_offset": 2048, 00:23:37.274 "data_size": 63488 00:23:37.274 } 00:23:37.274 ] 00:23:37.274 }' 00:23:37.274 16:41:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:37.274 16:41:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:37.274 16:41:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:37.533 16:41:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:37.533 16:41:08 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:37.533 [2024-07-13 16:41:08.984398] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:37.533 [2024-07-13 16:41:08.984732] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:37.533 [2024-07-13 16:41:08.991655] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:23:37.533 [2024-07-13 16:41:08.994537] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:37.792 16:41:09 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:38.740 16:41:10 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.740 16:41:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:38.740 16:41:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:38.740 16:41:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:38.740 16:41:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:38.740 16:41:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.740 16:41:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:38.998 "name": "raid_bdev1", 00:23:38.998 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:38.998 "strip_size_kb": 64, 00:23:38.998 "state": "online", 00:23:38.998 "raid_level": "raid5f", 00:23:38.998 "superblock": true, 00:23:38.998 "num_base_bdevs": 3, 00:23:38.998 "num_base_bdevs_discovered": 3, 00:23:38.998 "num_base_bdevs_operational": 3, 00:23:38.998 "process": { 00:23:38.998 "type": "rebuild", 00:23:38.998 "target": "spare", 00:23:38.998 "progress": { 00:23:38.998 "blocks": 24576, 00:23:38.998 "percent": 19 00:23:38.998 } 00:23:38.998 }, 00:23:38.998 "base_bdevs_list": [ 00:23:38.998 { 00:23:38.998 "name": "spare", 00:23:38.998 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:38.998 "is_configured": true, 00:23:38.998 "data_offset": 2048, 00:23:38.998 "data_size": 63488 00:23:38.998 }, 00:23:38.998 { 00:23:38.998 "name": "BaseBdev2", 00:23:38.998 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:38.998 "is_configured": true, 00:23:38.998 "data_offset": 2048, 00:23:38.998 "data_size": 63488 00:23:38.998 }, 00:23:38.998 { 00:23:38.998 "name": "BaseBdev3", 00:23:38.998 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:38.998 "is_configured": true, 00:23:38.998 "data_offset": 2048, 00:23:38.998 "data_size": 63488 00:23:38.998 } 00:23:38.998 ] 00:23:38.998 }' 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 
00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:38.998 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@657 -- # local timeout=606 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.998 16:41:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.256 16:41:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:39.256 "name": "raid_bdev1", 00:23:39.256 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:39.256 "strip_size_kb": 64, 00:23:39.256 "state": "online", 00:23:39.256 "raid_level": "raid5f", 00:23:39.256 "superblock": true, 00:23:39.256 "num_base_bdevs": 3, 00:23:39.256 "num_base_bdevs_discovered": 3, 00:23:39.256 "num_base_bdevs_operational": 3, 00:23:39.256 "process": { 00:23:39.256 "type": "rebuild", 00:23:39.256 "target": "spare", 00:23:39.256 "progress": { 00:23:39.256 "blocks": 30720, 00:23:39.256 "percent": 24 00:23:39.256 } 00:23:39.256 }, 00:23:39.256 "base_bdevs_list": [ 00:23:39.256 { 00:23:39.256 "name": "spare", 00:23:39.256 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:39.256 "is_configured": true, 00:23:39.256 "data_offset": 2048, 00:23:39.256 "data_size": 63488 00:23:39.256 }, 00:23:39.256 { 00:23:39.256 "name": "BaseBdev2", 00:23:39.256 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:39.256 "is_configured": true, 00:23:39.256 "data_offset": 2048, 00:23:39.256 "data_size": 63488 00:23:39.256 }, 00:23:39.256 { 00:23:39.256 "name": "BaseBdev3", 00:23:39.256 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:39.256 "is_configured": true, 00:23:39.256 "data_offset": 2048, 00:23:39.256 "data_size": 63488 00:23:39.256 } 00:23:39.256 ] 00:23:39.256 }' 00:23:39.256 16:41:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:39.256 16:41:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.256 16:41:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:39.514 16:41:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.514 16:41:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:40.453 16:41:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:40.453 16:41:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.453 16:41:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:40.453 16:41:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:40.453 16:41:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:40.453 16:41:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:40.453 16:41:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.453 16:41:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.712 16:41:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:40.712 "name": "raid_bdev1", 00:23:40.712 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:40.712 "strip_size_kb": 64, 00:23:40.712 "state": "online", 00:23:40.712 "raid_level": "raid5f", 00:23:40.712 "superblock": true, 00:23:40.712 "num_base_bdevs": 3, 00:23:40.712 "num_base_bdevs_discovered": 3, 00:23:40.712 "num_base_bdevs_operational": 3, 00:23:40.712 "process": { 00:23:40.712 "type": "rebuild", 00:23:40.712 "target": "spare", 00:23:40.712 "progress": { 00:23:40.712 "blocks": 59392, 00:23:40.712 "percent": 46 00:23:40.712 } 00:23:40.712 }, 00:23:40.712 "base_bdevs_list": [ 00:23:40.712 { 00:23:40.712 "name": "spare", 00:23:40.712 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:40.712 "is_configured": true, 00:23:40.712 "data_offset": 2048, 00:23:40.712 "data_size": 63488 00:23:40.712 }, 00:23:40.712 { 00:23:40.712 "name": "BaseBdev2", 00:23:40.712 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:40.712 "is_configured": true, 00:23:40.712 "data_offset": 2048, 00:23:40.712 "data_size": 63488 00:23:40.712 }, 00:23:40.712 { 00:23:40.712 "name": "BaseBdev3", 00:23:40.712 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:40.712 "is_configured": true, 00:23:40.712 "data_offset": 2048, 00:23:40.712 "data_size": 63488 00:23:40.712 } 00:23:40.712 ] 00:23:40.712 }' 00:23:40.712 16:41:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:40.712 16:41:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:40.712 16:41:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:40.712 16:41:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:40.712 16:41:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:41.650 16:41:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:41.650 16:41:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.650 16:41:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:41.650 16:41:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:41.650 16:41:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:41.650 16:41:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:41.650 16:41:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.650 16:41:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.909 16:41:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:41.909 "name": "raid_bdev1", 00:23:41.909 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:41.909 "strip_size_kb": 64, 00:23:41.909 "state": "online", 00:23:41.909 "raid_level": "raid5f", 00:23:41.909 "superblock": true, 00:23:41.909 "num_base_bdevs": 3, 00:23:41.909 "num_base_bdevs_discovered": 3, 00:23:41.909 "num_base_bdevs_operational": 3, 00:23:41.909 "process": { 00:23:41.909 "type": "rebuild", 00:23:41.909 "target": "spare", 00:23:41.909 "progress": { 00:23:41.909 "blocks": 86016, 00:23:41.909 "percent": 67 00:23:41.909 } 00:23:41.909 }, 00:23:41.909 "base_bdevs_list": [ 00:23:41.909 { 00:23:41.909 "name": "spare", 00:23:41.909 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:41.909 "is_configured": true, 00:23:41.909 "data_offset": 2048, 00:23:41.909 "data_size": 63488 00:23:41.909 }, 00:23:41.909 { 
00:23:41.909 "name": "BaseBdev2", 00:23:41.909 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:41.909 "is_configured": true, 00:23:41.909 "data_offset": 2048, 00:23:41.909 "data_size": 63488 00:23:41.909 }, 00:23:41.909 { 00:23:41.909 "name": "BaseBdev3", 00:23:41.909 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:41.909 "is_configured": true, 00:23:41.909 "data_offset": 2048, 00:23:41.909 "data_size": 63488 00:23:41.909 } 00:23:41.909 ] 00:23:41.909 }' 00:23:41.909 16:41:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:42.167 16:41:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.167 16:41:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:42.167 16:41:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.167 16:41:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:43.101 16:41:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:43.101 16:41:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.101 16:41:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:43.101 16:41:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:43.101 16:41:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:43.101 16:41:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:43.101 16:41:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.101 16:41:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.360 16:41:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:43.360 "name": "raid_bdev1", 00:23:43.360 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:43.360 "strip_size_kb": 64, 00:23:43.360 "state": "online", 00:23:43.360 "raid_level": "raid5f", 00:23:43.360 "superblock": true, 00:23:43.360 "num_base_bdevs": 3, 00:23:43.360 "num_base_bdevs_discovered": 3, 00:23:43.360 "num_base_bdevs_operational": 3, 00:23:43.360 "process": { 00:23:43.360 "type": "rebuild", 00:23:43.360 "target": "spare", 00:23:43.360 "progress": { 00:23:43.360 "blocks": 114688, 00:23:43.360 "percent": 90 00:23:43.360 } 00:23:43.360 }, 00:23:43.360 "base_bdevs_list": [ 00:23:43.360 { 00:23:43.360 "name": "spare", 00:23:43.360 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:43.360 "is_configured": true, 00:23:43.360 "data_offset": 2048, 00:23:43.360 "data_size": 63488 00:23:43.360 }, 00:23:43.360 { 00:23:43.360 "name": "BaseBdev2", 00:23:43.360 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:43.360 "is_configured": true, 00:23:43.360 "data_offset": 2048, 00:23:43.360 "data_size": 63488 00:23:43.360 }, 00:23:43.360 { 00:23:43.360 "name": "BaseBdev3", 00:23:43.360 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:43.360 "is_configured": true, 00:23:43.360 "data_offset": 2048, 00:23:43.360 "data_size": 63488 00:23:43.360 } 00:23:43.360 ] 00:23:43.360 }' 00:23:43.360 16:41:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:43.360 16:41:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.360 16:41:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:43.360 16:41:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.360 16:41:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:43.928 [2024-07-13 16:41:15.260960] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:43.928 [2024-07-13 16:41:15.261333] 
bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:43.928 [2024-07-13 16:41:15.261664] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.494 16:41:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:44.494 16:41:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.494 16:41:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:44.494 16:41:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:44.494 16:41:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:44.494 16:41:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:44.494 16:41:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.494 16:41:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:44.753 "name": "raid_bdev1", 00:23:44.753 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:44.753 "strip_size_kb": 64, 00:23:44.753 "state": "online", 00:23:44.753 "raid_level": "raid5f", 00:23:44.753 "superblock": true, 00:23:44.753 "num_base_bdevs": 3, 00:23:44.753 "num_base_bdevs_discovered": 3, 00:23:44.753 "num_base_bdevs_operational": 3, 00:23:44.753 "base_bdevs_list": [ 00:23:44.753 { 00:23:44.753 "name": "spare", 00:23:44.753 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:44.753 "is_configured": true, 00:23:44.753 "data_offset": 2048, 00:23:44.753 "data_size": 63488 00:23:44.753 }, 00:23:44.753 { 00:23:44.753 "name": "BaseBdev2", 00:23:44.753 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:44.753 "is_configured": true, 00:23:44.753 "data_offset": 2048, 00:23:44.753 "data_size": 63488 00:23:44.753 }, 00:23:44.753 { 00:23:44.753 "name": "BaseBdev3", 00:23:44.753 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:44.753 "is_configured": true, 00:23:44.753 "data_offset": 2048, 00:23:44.753 "data_size": 63488 00:23:44.753 } 00:23:44.753 ] 00:23:44.753 }' 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@660 -- # break 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.753 16:41:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.011 16:41:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:45.011 "name": "raid_bdev1", 00:23:45.011 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:45.011 "strip_size_kb": 64, 00:23:45.011 "state": "online", 00:23:45.011 "raid_level": "raid5f", 00:23:45.011 "superblock": true, 00:23:45.011 "num_base_bdevs": 3, 00:23:45.011 "num_base_bdevs_discovered": 3, 00:23:45.011 
"num_base_bdevs_operational": 3, 00:23:45.011 "base_bdevs_list": [ 00:23:45.011 { 00:23:45.011 "name": "spare", 00:23:45.011 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:45.011 "is_configured": true, 00:23:45.011 "data_offset": 2048, 00:23:45.011 "data_size": 63488 00:23:45.011 }, 00:23:45.011 { 00:23:45.011 "name": "BaseBdev2", 00:23:45.011 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:45.011 "is_configured": true, 00:23:45.011 "data_offset": 2048, 00:23:45.011 "data_size": 63488 00:23:45.011 }, 00:23:45.011 { 00:23:45.011 "name": "BaseBdev3", 00:23:45.011 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:45.011 "is_configured": true, 00:23:45.011 "data_offset": 2048, 00:23:45.011 "data_size": 63488 00:23:45.011 } 00:23:45.011 ] 00:23:45.011 }' 00:23:45.011 16:41:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:45.011 16:41:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:45.011 16:41:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.270 "name": "raid_bdev1", 00:23:45.270 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:45.270 "strip_size_kb": 64, 00:23:45.270 "state": "online", 00:23:45.270 "raid_level": "raid5f", 00:23:45.270 "superblock": true, 00:23:45.270 "num_base_bdevs": 3, 00:23:45.270 "num_base_bdevs_discovered": 3, 00:23:45.270 "num_base_bdevs_operational": 3, 00:23:45.270 "base_bdevs_list": [ 00:23:45.270 { 00:23:45.270 "name": "spare", 00:23:45.270 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:45.270 "is_configured": true, 00:23:45.270 "data_offset": 2048, 00:23:45.270 "data_size": 63488 00:23:45.270 }, 00:23:45.270 { 00:23:45.270 "name": "BaseBdev2", 00:23:45.270 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:45.270 "is_configured": true, 00:23:45.270 "data_offset": 2048, 00:23:45.270 "data_size": 63488 00:23:45.270 }, 00:23:45.270 { 00:23:45.270 "name": "BaseBdev3", 00:23:45.270 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:45.270 "is_configured": true, 00:23:45.270 "data_offset": 2048, 00:23:45.270 "data_size": 63488 00:23:45.270 } 00:23:45.270 ] 00:23:45.270 }' 00:23:45.270 16:41:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.270 16:41:16 -- common/autotest_common.sh@10 -- # set +x 00:23:45.837 16:41:17 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:46.096 [2024-07-13 16:41:17.544776] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.096 [2024-07-13 16:41:17.545085] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.096 [2024-07-13 16:41:17.545348] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.096 [2024-07-13 16:41:17.545490] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.096 [2024-07-13 16:41:17.545662] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:46.355 16:41:17 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.355 16:41:17 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:46.355 16:41:17 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:46.355 16:41:17 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:46.355 16:41:17 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:46.355 16:41:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:46.355 16:41:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:46.355 16:41:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:46.355 16:41:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:46.355 16:41:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:46.355 16:41:17 -- bdev/nbd_common.sh@12 -- # local i 00:23:46.355 16:41:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:46.355 16:41:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:46.355 16:41:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:47.026 /dev/nbd0 00:23:47.026 16:41:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:47.026 16:41:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:47.026 16:41:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:47.026 16:41:18 -- common/autotest_common.sh@857 -- # local i 00:23:47.026 16:41:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:47.026 16:41:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:47.026 16:41:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:47.026 16:41:18 -- common/autotest_common.sh@861 -- # break 00:23:47.026 16:41:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:47.026 16:41:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:47.026 16:41:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:47.026 1+0 records in 00:23:47.026 1+0 records out 00:23:47.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477617 s, 8.6 MB/s 00:23:47.026 16:41:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.026 16:41:18 -- common/autotest_common.sh@874 -- # size=4096 00:23:47.026 16:41:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.026 16:41:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:47.026 16:41:18 -- common/autotest_common.sh@877 -- # return 0 00:23:47.026 16:41:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:47.026 16:41:18 -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:23:47.026 16:41:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:47.026 /dev/nbd1 00:23:47.026 16:41:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:47.026 16:41:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:47.026 16:41:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:47.026 16:41:18 -- common/autotest_common.sh@857 -- # local i 00:23:47.026 16:41:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:47.026 16:41:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:47.026 16:41:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:47.285 16:41:18 -- common/autotest_common.sh@861 -- # break 00:23:47.285 16:41:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:47.285 16:41:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:47.285 16:41:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:47.285 1+0 records in 00:23:47.285 1+0 records out 00:23:47.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055557 s, 7.4 MB/s 00:23:47.285 16:41:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.285 16:41:18 -- common/autotest_common.sh@874 -- # size=4096 00:23:47.285 16:41:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.285 16:41:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:47.285 16:41:18 -- common/autotest_common.sh@877 -- # return 0 00:23:47.285 16:41:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:47.285 16:41:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:47.285 16:41:18 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:47.285 16:41:18 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:47.285 16:41:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:47.285 16:41:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:47.285 16:41:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:47.285 16:41:18 -- bdev/nbd_common.sh@51 -- # local i 00:23:47.285 16:41:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:47.285 16:41:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@41 -- # break 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@45 -- # return 0 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:47.544 16:41:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:47.803 16:41:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:47.803 16:41:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:47.803 16:41:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:47.803 16:41:19 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:47.803 16:41:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:47.803 16:41:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:47.803 16:41:19 -- bdev/nbd_common.sh@41 -- # break 00:23:47.803 16:41:19 -- bdev/nbd_common.sh@45 -- # return 0 00:23:47.803 16:41:19 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:47.803 16:41:19 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:47.803 16:41:19 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:47.803 16:41:19 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:48.062 16:41:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:48.062 [2024-07-13 16:41:19.485760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:48.062 [2024-07-13 16:41:19.486175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.062 [2024-07-13 16:41:19.486300] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:48.062 [2024-07-13 16:41:19.486444] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.062 [2024-07-13 16:41:19.489360] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.062 [2024-07-13 16:41:19.489590] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:48.062 [2024-07-13 16:41:19.489792] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:48.062 [2024-07-13 16:41:19.489963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:48.062 BaseBdev1 00:23:48.062 16:41:19 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:48.062 16:41:19 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:23:48.062 16:41:19 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:23:48.322 16:41:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:48.581 [2024-07-13 16:41:19.901945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:48.581 [2024-07-13 16:41:19.902336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.581 [2024-07-13 16:41:19.902424] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:48.581 [2024-07-13 16:41:19.902523] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.581 [2024-07-13 16:41:19.903130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.581 [2024-07-13 16:41:19.903287] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:48.581 [2024-07-13 16:41:19.903463] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:23:48.581 [2024-07-13 16:41:19.903539] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:23:48.581 [2024-07-13 16:41:19.903603] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:23:48.581 [2024-07-13 16:41:19.903680] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:23:48.581 [2024-07-13 16:41:19.903797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:48.581 BaseBdev2 00:23:48.581 16:41:19 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:48.581 16:41:19 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:48.581 16:41:19 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:48.840 16:41:20 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:49.099 [2024-07-13 16:41:20.318054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:49.099 [2024-07-13 16:41:20.318418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.099 [2024-07-13 16:41:20.318503] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:49.099 [2024-07-13 16:41:20.318672] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.099 [2024-07-13 16:41:20.319230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.099 [2024-07-13 16:41:20.319407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:49.099 [2024-07-13 16:41:20.319589] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:49.099 [2024-07-13 16:41:20.319696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:49.099 BaseBdev3 00:23:49.099 16:41:20 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:49.099 16:41:20 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:49.358 [2024-07-13 16:41:20.710151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:49.358 [2024-07-13 16:41:20.710518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.358 [2024-07-13 16:41:20.710603] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:49.358 [2024-07-13 16:41:20.710707] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.358 [2024-07-13 16:41:20.711303] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.358 [2024-07-13 16:41:20.711469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:49.358 [2024-07-13 16:41:20.711677] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:49.358 [2024-07-13 16:41:20.711811] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:49.358 spare 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.358 16:41:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.358 [2024-07-13 16:41:20.811987] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:23:49.358 [2024-07-13 16:41:20.812250] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:49.358 [2024-07-13 16:41:20.812558] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000044230 00:23:49.358 [2024-07-13 16:41:20.813466] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:23:49.358 [2024-07-13 16:41:20.813586] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:23:49.358 [2024-07-13 16:41:20.813863] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.617 16:41:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.617 "name": "raid_bdev1", 00:23:49.617 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:49.617 "strip_size_kb": 64, 00:23:49.617 "state": "online", 00:23:49.617 "raid_level": "raid5f", 00:23:49.617 "superblock": true, 00:23:49.617 "num_base_bdevs": 3, 00:23:49.617 "num_base_bdevs_discovered": 3, 00:23:49.617 "num_base_bdevs_operational": 3, 00:23:49.617 "base_bdevs_list": [ 00:23:49.617 { 00:23:49.617 "name": "spare", 00:23:49.617 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:49.617 "is_configured": true, 00:23:49.617 "data_offset": 2048, 00:23:49.617 "data_size": 63488 00:23:49.617 }, 00:23:49.617 { 00:23:49.617 "name": "BaseBdev2", 00:23:49.617 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:49.617 "is_configured": true, 00:23:49.617 "data_offset": 2048, 00:23:49.617 "data_size": 63488 00:23:49.617 }, 00:23:49.617 { 00:23:49.617 "name": "BaseBdev3", 00:23:49.617 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:49.617 "is_configured": true, 00:23:49.617 "data_offset": 2048, 00:23:49.617 "data_size": 63488 00:23:49.617 } 00:23:49.617 ] 00:23:49.617 }' 00:23:49.617 16:41:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.617 16:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:50.185 16:41:21 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:50.185 16:41:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:50.185 16:41:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:50.185 16:41:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:50.185 16:41:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:50.185 16:41:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.185 16:41:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.445 16:41:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:50.445 "name": "raid_bdev1", 00:23:50.445 "uuid": "ca498c9d-2d85-44db-b6d9-7418628e992d", 00:23:50.445 
"strip_size_kb": 64, 00:23:50.445 "state": "online", 00:23:50.445 "raid_level": "raid5f", 00:23:50.445 "superblock": true, 00:23:50.445 "num_base_bdevs": 3, 00:23:50.445 "num_base_bdevs_discovered": 3, 00:23:50.445 "num_base_bdevs_operational": 3, 00:23:50.445 "base_bdevs_list": [ 00:23:50.445 { 00:23:50.445 "name": "spare", 00:23:50.445 "uuid": "1d8fe2c1-d028-5d8d-be94-9c67e48c24e4", 00:23:50.445 "is_configured": true, 00:23:50.445 "data_offset": 2048, 00:23:50.445 "data_size": 63488 00:23:50.445 }, 00:23:50.445 { 00:23:50.445 "name": "BaseBdev2", 00:23:50.445 "uuid": "ae186342-7425-544f-ab49-8686d791cc1f", 00:23:50.445 "is_configured": true, 00:23:50.445 "data_offset": 2048, 00:23:50.445 "data_size": 63488 00:23:50.445 }, 00:23:50.445 { 00:23:50.445 "name": "BaseBdev3", 00:23:50.445 "uuid": "c8bd59b1-05c9-5d6c-93bd-1bac07d2ec98", 00:23:50.445 "is_configured": true, 00:23:50.445 "data_offset": 2048, 00:23:50.445 "data_size": 63488 00:23:50.445 } 00:23:50.445 ] 00:23:50.445 }' 00:23:50.445 16:41:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:50.445 16:41:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:50.445 16:41:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:50.445 16:41:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:50.445 16:41:21 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.445 16:41:21 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:50.705 16:41:22 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.705 16:41:22 -- bdev/bdev_raid.sh@709 -- # killprocess 139341 00:23:50.705 16:41:22 -- common/autotest_common.sh@926 -- # '[' -z 139341 ']' 00:23:50.705 16:41:22 -- common/autotest_common.sh@930 -- # kill -0 139341 00:23:50.705 16:41:22 -- common/autotest_common.sh@931 -- # uname 00:23:50.705 16:41:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:50.705 16:41:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139341 00:23:50.705 16:41:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:50.705 16:41:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:50.705 16:41:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139341' 00:23:50.705 killing process with pid 139341 00:23:50.705 16:41:22 -- common/autotest_common.sh@945 -- # kill 139341 00:23:50.705 Received shutdown signal, test time was about 60.000000 seconds 00:23:50.705 00:23:50.705 Latency(us) 00:23:50.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.705 =================================================================================================================== 00:23:50.705 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.705 16:41:22 -- common/autotest_common.sh@950 -- # wait 139341 00:23:50.705 [2024-07-13 16:41:22.038238] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:50.705 [2024-07-13 16:41:22.038462] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:50.705 [2024-07-13 16:41:22.038695] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:50.705 [2024-07-13 16:41:22.038735] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:23:50.705 [2024-07-13 16:41:22.114708] bdev_raid.c:1251:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:51.273 00:23:51.273 real 0m22.982s 00:23:51.273 user 0m35.067s 00:23:51.273 sys 0m3.927s 00:23:51.273 16:41:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:51.273 16:41:22 -- common/autotest_common.sh@10 -- # set +x 00:23:51.273 ************************************ 00:23:51.273 END TEST raid5f_rebuild_test_sb 00:23:51.273 ************************************ 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:23:51.273 16:41:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:51.273 16:41:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:51.273 16:41:22 -- common/autotest_common.sh@10 -- # set +x 00:23:51.273 ************************************ 00:23:51.273 START TEST raid5f_state_function_test 00:23:51.273 ************************************ 00:23:51.273 16:41:22 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=139967 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@227 -- # echo 
'Process raid pid: 139967' 00:23:51.273 Process raid pid: 139967 00:23:51.273 16:41:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139967 /var/tmp/spdk-raid.sock 00:23:51.273 16:41:22 -- common/autotest_common.sh@819 -- # '[' -z 139967 ']' 00:23:51.273 16:41:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:51.273 16:41:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:51.273 16:41:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:51.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:51.273 16:41:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:51.273 16:41:22 -- common/autotest_common.sh@10 -- # set +x 00:23:51.273 [2024-07-13 16:41:22.665891] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:23:51.273 [2024-07-13 16:41:22.666368] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.532 [2024-07-13 16:41:22.816743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.532 [2024-07-13 16:41:22.906349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.532 [2024-07-13 16:41:22.987008] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:52.467 16:41:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:52.467 16:41:23 -- common/autotest_common.sh@852 -- # return 0 00:23:52.467 16:41:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:52.467 [2024-07-13 16:41:23.805201] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:52.467 [2024-07-13 16:41:23.805602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:52.467 [2024-07-13 16:41:23.805701] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:52.467 [2024-07-13 16:41:23.805762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:52.467 [2024-07-13 16:41:23.805849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:52.467 [2024-07-13 16:41:23.805935] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:52.467 [2024-07-13 16:41:23.805965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:52.468 [2024-07-13 16:41:23.806061] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.468 
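(The verify_raid_bdev_state trace above, bdev_raid.sh@117 onward, repeats the same pattern throughout this log: declare the expected properties as locals, fetch the full raid bdev list over the test RPC socket, and pick one entry out by name with jq. A minimal bash sketch of that fetch-and-check step, reconstructed from the traced commands — the field assertions at the end are illustrative assumptions, not the helper's exact checks:

    # hypothetical reconstruction of the step traced at bdev_raid.sh@127
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    raid_bdev_info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid")')
    # assert a couple of the fields that the JSON dumps in this log show
    [[ $(jq -r '.state' <<<"$raid_bdev_info") == configuring ]]
    [[ $(jq -r '.raid_level' <<<"$raid_bdev_info") == raid5f ]]
)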
16:41:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.468 16:41:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.725 16:41:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.725 "name": "Existed_Raid", 00:23:52.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.725 "strip_size_kb": 64, 00:23:52.725 "state": "configuring", 00:23:52.725 "raid_level": "raid5f", 00:23:52.725 "superblock": false, 00:23:52.725 "num_base_bdevs": 4, 00:23:52.725 "num_base_bdevs_discovered": 0, 00:23:52.725 "num_base_bdevs_operational": 4, 00:23:52.725 "base_bdevs_list": [ 00:23:52.725 { 00:23:52.725 "name": "BaseBdev1", 00:23:52.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.725 "is_configured": false, 00:23:52.725 "data_offset": 0, 00:23:52.725 "data_size": 0 00:23:52.725 }, 00:23:52.726 { 00:23:52.726 "name": "BaseBdev2", 00:23:52.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.726 "is_configured": false, 00:23:52.726 "data_offset": 0, 00:23:52.726 "data_size": 0 00:23:52.726 }, 00:23:52.726 { 00:23:52.726 "name": "BaseBdev3", 00:23:52.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.726 "is_configured": false, 00:23:52.726 "data_offset": 0, 00:23:52.726 "data_size": 0 00:23:52.726 }, 00:23:52.726 { 00:23:52.726 "name": "BaseBdev4", 00:23:52.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.726 "is_configured": false, 00:23:52.726 "data_offset": 0, 00:23:52.726 "data_size": 0 00:23:52.726 } 00:23:52.726 ] 00:23:52.726 }' 00:23:52.726 16:41:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.726 16:41:24 -- common/autotest_common.sh@10 -- # set +x 00:23:53.293 16:41:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:53.552 [2024-07-13 16:41:24.925246] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:53.552 [2024-07-13 16:41:24.925534] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:23:53.552 16:41:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:53.811 [2024-07-13 16:41:25.177345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:53.811 [2024-07-13 16:41:25.177557] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:53.811 [2024-07-13 16:41:25.177645] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:53.811 [2024-07-13 16:41:25.177705] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:53.811 [2024-07-13 16:41:25.177731] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:53.811 [2024-07-13 16:41:25.177769] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:53.811 [2024-07-13 16:41:25.177793] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:53.811 
[2024-07-13 16:41:25.177904] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:53.811 16:41:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:54.070 [2024-07-13 16:41:25.461644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:54.070 BaseBdev1 00:23:54.070 16:41:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:54.070 16:41:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:54.070 16:41:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:54.070 16:41:25 -- common/autotest_common.sh@889 -- # local i 00:23:54.070 16:41:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:54.070 16:41:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:54.070 16:41:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:54.328 16:41:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:54.587 [ 00:23:54.587 { 00:23:54.587 "name": "BaseBdev1", 00:23:54.587 "aliases": [ 00:23:54.587 "19a36bd3-4cd2-46d5-948f-11933c5688fd" 00:23:54.587 ], 00:23:54.587 "product_name": "Malloc disk", 00:23:54.587 "block_size": 512, 00:23:54.587 "num_blocks": 65536, 00:23:54.587 "uuid": "19a36bd3-4cd2-46d5-948f-11933c5688fd", 00:23:54.587 "assigned_rate_limits": { 00:23:54.587 "rw_ios_per_sec": 0, 00:23:54.587 "rw_mbytes_per_sec": 0, 00:23:54.587 "r_mbytes_per_sec": 0, 00:23:54.587 "w_mbytes_per_sec": 0 00:23:54.587 }, 00:23:54.587 "claimed": true, 00:23:54.587 "claim_type": "exclusive_write", 00:23:54.587 "zoned": false, 00:23:54.587 "supported_io_types": { 00:23:54.587 "read": true, 00:23:54.587 "write": true, 00:23:54.587 "unmap": true, 00:23:54.587 "write_zeroes": true, 00:23:54.587 "flush": true, 00:23:54.587 "reset": true, 00:23:54.587 "compare": false, 00:23:54.587 "compare_and_write": false, 00:23:54.587 "abort": true, 00:23:54.587 "nvme_admin": false, 00:23:54.587 "nvme_io": false 00:23:54.587 }, 00:23:54.587 "memory_domains": [ 00:23:54.587 { 00:23:54.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.587 "dma_device_type": 2 00:23:54.587 } 00:23:54.587 ], 00:23:54.587 "driver_specific": {} 00:23:54.587 } 00:23:54.587 ] 00:23:54.587 16:41:25 -- common/autotest_common.sh@895 -- # return 0 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.587 16:41:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:23:54.846 16:41:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:54.846 "name": "Existed_Raid", 00:23:54.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.846 "strip_size_kb": 64, 00:23:54.846 "state": "configuring", 00:23:54.846 "raid_level": "raid5f", 00:23:54.846 "superblock": false, 00:23:54.846 "num_base_bdevs": 4, 00:23:54.846 "num_base_bdevs_discovered": 1, 00:23:54.846 "num_base_bdevs_operational": 4, 00:23:54.846 "base_bdevs_list": [ 00:23:54.846 { 00:23:54.846 "name": "BaseBdev1", 00:23:54.846 "uuid": "19a36bd3-4cd2-46d5-948f-11933c5688fd", 00:23:54.846 "is_configured": true, 00:23:54.846 "data_offset": 0, 00:23:54.846 "data_size": 65536 00:23:54.846 }, 00:23:54.846 { 00:23:54.846 "name": "BaseBdev2", 00:23:54.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.846 "is_configured": false, 00:23:54.846 "data_offset": 0, 00:23:54.846 "data_size": 0 00:23:54.846 }, 00:23:54.846 { 00:23:54.846 "name": "BaseBdev3", 00:23:54.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.846 "is_configured": false, 00:23:54.846 "data_offset": 0, 00:23:54.846 "data_size": 0 00:23:54.846 }, 00:23:54.846 { 00:23:54.846 "name": "BaseBdev4", 00:23:54.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.846 "is_configured": false, 00:23:54.846 "data_offset": 0, 00:23:54.846 "data_size": 0 00:23:54.846 } 00:23:54.846 ] 00:23:54.846 }' 00:23:54.846 16:41:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:54.846 16:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:55.413 16:41:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:55.413 [2024-07-13 16:41:26.857981] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:55.413 [2024-07-13 16:41:26.858327] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:23:55.413 16:41:26 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:55.413 16:41:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:55.671 [2024-07-13 16:41:27.122152] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:55.671 [2024-07-13 16:41:27.125092] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:55.671 [2024-07-13 16:41:27.125348] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:55.671 [2024-07-13 16:41:27.125445] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:55.671 [2024-07-13 16:41:27.125528] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:55.671 [2024-07-13 16:41:27.125619] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:55.671 [2024-07-13 16:41:27.125675] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:55.930 16:41:27 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.930 "name": "Existed_Raid", 00:23:55.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.930 "strip_size_kb": 64, 00:23:55.930 "state": "configuring", 00:23:55.930 "raid_level": "raid5f", 00:23:55.930 "superblock": false, 00:23:55.930 "num_base_bdevs": 4, 00:23:55.930 "num_base_bdevs_discovered": 1, 00:23:55.930 "num_base_bdevs_operational": 4, 00:23:55.930 "base_bdevs_list": [ 00:23:55.930 { 00:23:55.930 "name": "BaseBdev1", 00:23:55.930 "uuid": "19a36bd3-4cd2-46d5-948f-11933c5688fd", 00:23:55.930 "is_configured": true, 00:23:55.930 "data_offset": 0, 00:23:55.930 "data_size": 65536 00:23:55.930 }, 00:23:55.930 { 00:23:55.930 "name": "BaseBdev2", 00:23:55.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.930 "is_configured": false, 00:23:55.930 "data_offset": 0, 00:23:55.930 "data_size": 0 00:23:55.930 }, 00:23:55.930 { 00:23:55.930 "name": "BaseBdev3", 00:23:55.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.930 "is_configured": false, 00:23:55.930 "data_offset": 0, 00:23:55.930 "data_size": 0 00:23:55.930 }, 00:23:55.930 { 00:23:55.930 "name": "BaseBdev4", 00:23:55.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.930 "is_configured": false, 00:23:55.930 "data_offset": 0, 00:23:55.930 "data_size": 0 00:23:55.930 } 00:23:55.930 ] 00:23:55.930 }' 00:23:55.930 16:41:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.930 16:41:27 -- common/autotest_common.sh@10 -- # set +x 00:23:56.497 16:41:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:56.755 [2024-07-13 16:41:28.209919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:56.755 BaseBdev2 00:23:57.016 16:41:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:57.016 16:41:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:57.016 16:41:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:57.016 16:41:28 -- common/autotest_common.sh@889 -- # local i 00:23:57.016 16:41:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:57.016 16:41:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:57.016 16:41:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:57.016 16:41:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:57.275 [ 00:23:57.275 { 00:23:57.275 "name": "BaseBdev2", 00:23:57.275 "aliases": [ 00:23:57.275 
"fa523252-b41e-425e-b4b1-9d1106e304d0" 00:23:57.275 ], 00:23:57.275 "product_name": "Malloc disk", 00:23:57.275 "block_size": 512, 00:23:57.275 "num_blocks": 65536, 00:23:57.275 "uuid": "fa523252-b41e-425e-b4b1-9d1106e304d0", 00:23:57.275 "assigned_rate_limits": { 00:23:57.275 "rw_ios_per_sec": 0, 00:23:57.275 "rw_mbytes_per_sec": 0, 00:23:57.275 "r_mbytes_per_sec": 0, 00:23:57.275 "w_mbytes_per_sec": 0 00:23:57.275 }, 00:23:57.275 "claimed": true, 00:23:57.275 "claim_type": "exclusive_write", 00:23:57.275 "zoned": false, 00:23:57.275 "supported_io_types": { 00:23:57.275 "read": true, 00:23:57.275 "write": true, 00:23:57.275 "unmap": true, 00:23:57.275 "write_zeroes": true, 00:23:57.275 "flush": true, 00:23:57.275 "reset": true, 00:23:57.275 "compare": false, 00:23:57.275 "compare_and_write": false, 00:23:57.275 "abort": true, 00:23:57.275 "nvme_admin": false, 00:23:57.275 "nvme_io": false 00:23:57.275 }, 00:23:57.275 "memory_domains": [ 00:23:57.275 { 00:23:57.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.275 "dma_device_type": 2 00:23:57.275 } 00:23:57.275 ], 00:23:57.275 "driver_specific": {} 00:23:57.275 } 00:23:57.275 ] 00:23:57.275 16:41:28 -- common/autotest_common.sh@895 -- # return 0 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.275 16:41:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.533 16:41:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.533 "name": "Existed_Raid", 00:23:57.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.533 "strip_size_kb": 64, 00:23:57.533 "state": "configuring", 00:23:57.533 "raid_level": "raid5f", 00:23:57.533 "superblock": false, 00:23:57.533 "num_base_bdevs": 4, 00:23:57.533 "num_base_bdevs_discovered": 2, 00:23:57.533 "num_base_bdevs_operational": 4, 00:23:57.533 "base_bdevs_list": [ 00:23:57.533 { 00:23:57.533 "name": "BaseBdev1", 00:23:57.533 "uuid": "19a36bd3-4cd2-46d5-948f-11933c5688fd", 00:23:57.533 "is_configured": true, 00:23:57.533 "data_offset": 0, 00:23:57.533 "data_size": 65536 00:23:57.533 }, 00:23:57.533 { 00:23:57.533 "name": "BaseBdev2", 00:23:57.533 "uuid": "fa523252-b41e-425e-b4b1-9d1106e304d0", 00:23:57.533 "is_configured": true, 00:23:57.533 "data_offset": 0, 00:23:57.533 "data_size": 65536 00:23:57.533 }, 00:23:57.533 { 00:23:57.533 "name": "BaseBdev3", 00:23:57.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.533 "is_configured": false, 00:23:57.533 "data_offset": 0, 00:23:57.533 "data_size": 0 00:23:57.533 
}, 00:23:57.533 { 00:23:57.533 "name": "BaseBdev4", 00:23:57.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.533 "is_configured": false, 00:23:57.533 "data_offset": 0, 00:23:57.533 "data_size": 0 00:23:57.533 } 00:23:57.533 ] 00:23:57.533 }' 00:23:57.533 16:41:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.533 16:41:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.098 16:41:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:58.356 [2024-07-13 16:41:29.670142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:58.356 BaseBdev3 00:23:58.356 16:41:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:58.356 16:41:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:58.356 16:41:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:58.356 16:41:29 -- common/autotest_common.sh@889 -- # local i 00:23:58.356 16:41:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:58.356 16:41:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:58.356 16:41:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:58.614 16:41:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:58.874 [ 00:23:58.874 { 00:23:58.874 "name": "BaseBdev3", 00:23:58.874 "aliases": [ 00:23:58.874 "3e4e871a-2708-4119-a25b-72dae7e3980e" 00:23:58.874 ], 00:23:58.874 "product_name": "Malloc disk", 00:23:58.874 "block_size": 512, 00:23:58.874 "num_blocks": 65536, 00:23:58.874 "uuid": "3e4e871a-2708-4119-a25b-72dae7e3980e", 00:23:58.874 "assigned_rate_limits": { 00:23:58.874 "rw_ios_per_sec": 0, 00:23:58.874 "rw_mbytes_per_sec": 0, 00:23:58.874 "r_mbytes_per_sec": 0, 00:23:58.874 "w_mbytes_per_sec": 0 00:23:58.874 }, 00:23:58.874 "claimed": true, 00:23:58.874 "claim_type": "exclusive_write", 00:23:58.874 "zoned": false, 00:23:58.874 "supported_io_types": { 00:23:58.874 "read": true, 00:23:58.874 "write": true, 00:23:58.874 "unmap": true, 00:23:58.874 "write_zeroes": true, 00:23:58.874 "flush": true, 00:23:58.874 "reset": true, 00:23:58.874 "compare": false, 00:23:58.874 "compare_and_write": false, 00:23:58.874 "abort": true, 00:23:58.874 "nvme_admin": false, 00:23:58.874 "nvme_io": false 00:23:58.874 }, 00:23:58.874 "memory_domains": [ 00:23:58.874 { 00:23:58.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.874 "dma_device_type": 2 00:23:58.874 } 00:23:58.874 ], 00:23:58.874 "driver_specific": {} 00:23:58.874 } 00:23:58.874 ] 00:23:58.874 16:41:30 -- common/autotest_common.sh@895 -- # return 0 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:58.874 16:41:30 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.874 16:41:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.133 16:41:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:59.133 "name": "Existed_Raid", 00:23:59.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.133 "strip_size_kb": 64, 00:23:59.133 "state": "configuring", 00:23:59.133 "raid_level": "raid5f", 00:23:59.133 "superblock": false, 00:23:59.133 "num_base_bdevs": 4, 00:23:59.133 "num_base_bdevs_discovered": 3, 00:23:59.133 "num_base_bdevs_operational": 4, 00:23:59.133 "base_bdevs_list": [ 00:23:59.133 { 00:23:59.133 "name": "BaseBdev1", 00:23:59.133 "uuid": "19a36bd3-4cd2-46d5-948f-11933c5688fd", 00:23:59.133 "is_configured": true, 00:23:59.133 "data_offset": 0, 00:23:59.133 "data_size": 65536 00:23:59.133 }, 00:23:59.133 { 00:23:59.133 "name": "BaseBdev2", 00:23:59.133 "uuid": "fa523252-b41e-425e-b4b1-9d1106e304d0", 00:23:59.133 "is_configured": true, 00:23:59.133 "data_offset": 0, 00:23:59.133 "data_size": 65536 00:23:59.133 }, 00:23:59.133 { 00:23:59.133 "name": "BaseBdev3", 00:23:59.133 "uuid": "3e4e871a-2708-4119-a25b-72dae7e3980e", 00:23:59.133 "is_configured": true, 00:23:59.133 "data_offset": 0, 00:23:59.133 "data_size": 65536 00:23:59.133 }, 00:23:59.133 { 00:23:59.133 "name": "BaseBdev4", 00:23:59.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.133 "is_configured": false, 00:23:59.133 "data_offset": 0, 00:23:59.133 "data_size": 0 00:23:59.133 } 00:23:59.133 ] 00:23:59.133 }' 00:23:59.134 16:41:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:59.134 16:41:30 -- common/autotest_common.sh@10 -- # set +x 00:23:59.701 16:41:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:59.960 [2024-07-13 16:41:31.304089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:59.960 [2024-07-13 16:41:31.304185] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:23:59.960 [2024-07-13 16:41:31.304195] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:59.960 [2024-07-13 16:41:31.304404] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:23:59.960 [2024-07-13 16:41:31.305354] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:23:59.960 [2024-07-13 16:41:31.305376] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:23:59.960 [2024-07-13 16:41:31.305648] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.960 BaseBdev4 00:23:59.960 16:41:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:23:59.960 16:41:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:23:59.960 16:41:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:59.960 16:41:31 -- common/autotest_common.sh@889 -- # local i 00:23:59.960 16:41:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:59.960 16:41:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:59.960 16:41:31 -- 
common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:00.219 16:41:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:00.479 [ 00:24:00.479 { 00:24:00.479 "name": "BaseBdev4", 00:24:00.479 "aliases": [ 00:24:00.479 "acf42f43-f07c-40ce-90e5-e381185bcb25" 00:24:00.479 ], 00:24:00.479 "product_name": "Malloc disk", 00:24:00.479 "block_size": 512, 00:24:00.479 "num_blocks": 65536, 00:24:00.479 "uuid": "acf42f43-f07c-40ce-90e5-e381185bcb25", 00:24:00.479 "assigned_rate_limits": { 00:24:00.479 "rw_ios_per_sec": 0, 00:24:00.479 "rw_mbytes_per_sec": 0, 00:24:00.479 "r_mbytes_per_sec": 0, 00:24:00.479 "w_mbytes_per_sec": 0 00:24:00.479 }, 00:24:00.479 "claimed": true, 00:24:00.479 "claim_type": "exclusive_write", 00:24:00.479 "zoned": false, 00:24:00.479 "supported_io_types": { 00:24:00.479 "read": true, 00:24:00.479 "write": true, 00:24:00.479 "unmap": true, 00:24:00.479 "write_zeroes": true, 00:24:00.479 "flush": true, 00:24:00.479 "reset": true, 00:24:00.479 "compare": false, 00:24:00.479 "compare_and_write": false, 00:24:00.479 "abort": true, 00:24:00.479 "nvme_admin": false, 00:24:00.479 "nvme_io": false 00:24:00.479 }, 00:24:00.479 "memory_domains": [ 00:24:00.479 { 00:24:00.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.479 "dma_device_type": 2 00:24:00.479 } 00:24:00.479 ], 00:24:00.479 "driver_specific": {} 00:24:00.479 } 00:24:00.479 ] 00:24:00.479 16:41:31 -- common/autotest_common.sh@895 -- # return 0 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.479 16:41:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:00.738 16:41:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.738 "name": "Existed_Raid", 00:24:00.738 "uuid": "ee314735-54b8-44d3-a938-3691cd6a2f40", 00:24:00.738 "strip_size_kb": 64, 00:24:00.738 "state": "online", 00:24:00.738 "raid_level": "raid5f", 00:24:00.738 "superblock": false, 00:24:00.738 "num_base_bdevs": 4, 00:24:00.738 "num_base_bdevs_discovered": 4, 00:24:00.738 "num_base_bdevs_operational": 4, 00:24:00.738 "base_bdevs_list": [ 00:24:00.738 { 00:24:00.738 "name": "BaseBdev1", 00:24:00.738 "uuid": "19a36bd3-4cd2-46d5-948f-11933c5688fd", 00:24:00.738 "is_configured": true, 00:24:00.738 "data_offset": 0, 00:24:00.738 "data_size": 65536 00:24:00.738 }, 00:24:00.738 { 00:24:00.738 "name": "BaseBdev2", 00:24:00.738 
"uuid": "fa523252-b41e-425e-b4b1-9d1106e304d0", 00:24:00.738 "is_configured": true, 00:24:00.738 "data_offset": 0, 00:24:00.738 "data_size": 65536 00:24:00.738 }, 00:24:00.738 { 00:24:00.738 "name": "BaseBdev3", 00:24:00.738 "uuid": "3e4e871a-2708-4119-a25b-72dae7e3980e", 00:24:00.738 "is_configured": true, 00:24:00.738 "data_offset": 0, 00:24:00.738 "data_size": 65536 00:24:00.738 }, 00:24:00.738 { 00:24:00.738 "name": "BaseBdev4", 00:24:00.738 "uuid": "acf42f43-f07c-40ce-90e5-e381185bcb25", 00:24:00.738 "is_configured": true, 00:24:00.738 "data_offset": 0, 00:24:00.738 "data_size": 65536 00:24:00.738 } 00:24:00.738 ] 00:24:00.738 }' 00:24:00.738 16:41:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.738 16:41:32 -- common/autotest_common.sh@10 -- # set +x 00:24:01.306 16:41:32 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:01.564 [2024-07-13 16:41:32.845357] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:01.564 16:41:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.823 16:41:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:01.823 "name": "Existed_Raid", 00:24:01.823 "uuid": "ee314735-54b8-44d3-a938-3691cd6a2f40", 00:24:01.823 "strip_size_kb": 64, 00:24:01.823 "state": "online", 00:24:01.823 "raid_level": "raid5f", 00:24:01.823 "superblock": false, 00:24:01.823 "num_base_bdevs": 4, 00:24:01.823 "num_base_bdevs_discovered": 3, 00:24:01.823 "num_base_bdevs_operational": 3, 00:24:01.823 "base_bdevs_list": [ 00:24:01.823 { 00:24:01.823 "name": null, 00:24:01.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.823 "is_configured": false, 00:24:01.823 "data_offset": 0, 00:24:01.823 "data_size": 65536 00:24:01.823 }, 00:24:01.823 { 00:24:01.823 "name": "BaseBdev2", 00:24:01.823 "uuid": "fa523252-b41e-425e-b4b1-9d1106e304d0", 00:24:01.823 "is_configured": true, 00:24:01.823 "data_offset": 0, 00:24:01.823 "data_size": 65536 00:24:01.823 }, 00:24:01.823 { 00:24:01.823 "name": "BaseBdev3", 00:24:01.823 "uuid": "3e4e871a-2708-4119-a25b-72dae7e3980e", 00:24:01.823 "is_configured": true, 00:24:01.823 "data_offset": 0, 00:24:01.823 "data_size": 65536 00:24:01.823 
}, 00:24:01.823 { 00:24:01.823 "name": "BaseBdev4", 00:24:01.823 "uuid": "acf42f43-f07c-40ce-90e5-e381185bcb25", 00:24:01.823 "is_configured": true, 00:24:01.823 "data_offset": 0, 00:24:01.823 "data_size": 65536 00:24:01.823 } 00:24:01.823 ] 00:24:01.823 }' 00:24:01.823 16:41:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:01.823 16:41:33 -- common/autotest_common.sh@10 -- # set +x 00:24:02.390 16:41:33 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:02.390 16:41:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:02.390 16:41:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.390 16:41:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:02.649 16:41:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:02.649 16:41:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:02.649 16:41:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:02.908 [2024-07-13 16:41:34.195429] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:02.908 [2024-07-13 16:41:34.195476] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:02.908 [2024-07-13 16:41:34.195579] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:02.908 16:41:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:02.908 16:41:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:02.908 16:41:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.908 16:41:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:03.166 16:41:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:03.166 16:41:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:03.166 16:41:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:03.166 [2024-07-13 16:41:34.588572] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:03.166 16:41:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:03.166 16:41:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:03.166 16:41:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.166 16:41:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:03.424 16:41:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:03.424 16:41:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:03.424 16:41:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:03.683 [2024-07-13 16:41:35.070285] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:03.683 [2024-07-13 16:41:35.070368] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:24:03.683 16:41:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:03.683 16:41:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:03.683 16:41:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.683 16:41:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:03.942 16:41:35 -- 
bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:03.942 16:41:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:03.942 16:41:35 -- bdev/bdev_raid.sh@287 -- # killprocess 139967 00:24:03.942 16:41:35 -- common/autotest_common.sh@926 -- # '[' -z 139967 ']' 00:24:03.942 16:41:35 -- common/autotest_common.sh@930 -- # kill -0 139967 00:24:03.942 16:41:35 -- common/autotest_common.sh@931 -- # uname 00:24:03.942 16:41:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:03.942 16:41:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139967 00:24:03.942 16:41:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:03.942 killing process with pid 139967 00:24:03.942 16:41:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:03.942 16:41:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139967' 00:24:03.942 16:41:35 -- common/autotest_common.sh@945 -- # kill 139967 00:24:03.942 [2024-07-13 16:41:35.362687] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:03.942 16:41:35 -- common/autotest_common.sh@950 -- # wait 139967 00:24:03.942 [2024-07-13 16:41:35.362786] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:04.510 00:24:04.510 real 0m13.155s 00:24:04.510 user 0m23.345s 00:24:04.510 sys 0m2.313s 00:24:04.510 ************************************ 00:24:04.510 END TEST raid5f_state_function_test 00:24:04.510 ************************************ 00:24:04.510 16:41:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.510 16:41:35 -- common/autotest_common.sh@10 -- # set +x 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:24:04.510 16:41:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:04.510 16:41:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:04.510 16:41:35 -- common/autotest_common.sh@10 -- # set +x 00:24:04.510 ************************************ 00:24:04.510 START TEST raid5f_state_function_test_sb 00:24:04.510 ************************************ 00:24:04.510 16:41:35 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@226 -- # raid_pid=140379 00:24:04.510 Process raid pid: 140379 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 140379' 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 140379 /var/tmp/spdk-raid.sock 00:24:04.510 16:41:35 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:04.510 16:41:35 -- common/autotest_common.sh@819 -- # '[' -z 140379 ']' 00:24:04.510 16:41:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:04.510 16:41:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:04.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:04.510 16:41:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:04.510 16:41:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:04.510 16:41:35 -- common/autotest_common.sh@10 -- # set +x 00:24:04.510 [2024-07-13 16:41:35.888091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
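(As in the earlier raid5f_state_function_test run, the bdev_raid.sh@206 trace just above builds the list of base bdev names with an arithmetic loop that echoes BaseBdev1..BaseBdev4. A rough bash equivalent of what the xtrace suggests — capturing the echoes with a command substitution is an assumption about how the array is filled:

    # hypothetical equivalent of the traced name-list construction
    num_base_bdevs=4
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    echo "${base_bdevs[@]}"    # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
)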
00:24:04.510 [2024-07-13 16:41:35.888299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:04.769 [2024-07-13 16:41:36.040660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:04.769 [2024-07-13 16:41:36.134064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:04.769 [2024-07-13 16:41:36.219224] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:05.339 16:41:36 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:05.339 16:41:36 -- common/autotest_common.sh@852 -- # return 0
00:24:05.339 16:41:36 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:24:05.599 [2024-07-13 16:41:36.975340] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:24:05.599 [2024-07-13 16:41:36.975438] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:24:05.599 [2024-07-13 16:41:36.975451] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:24:05.599 [2024-07-13 16:41:36.975473] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:24:05.599 [2024-07-13 16:41:36.975480] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:24:05.599 [2024-07-13 16:41:36.975527] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:24:05.599 [2024-07-13 16:41:36.975535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:24:05.599 [2024-07-13 16:41:36.975566] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:05.599 16:41:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:05.858 16:41:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:05.858 "name": "Existed_Raid",
00:24:05.858 "uuid": "f07e09e7-5eb2-41ac-bf8a-605db49ca4b7",
00:24:05.858 "strip_size_kb": 64,
00:24:05.858 "state": "configuring",
00:24:05.858 "raid_level": "raid5f",
00:24:05.858 "superblock": true,
00:24:05.858 "num_base_bdevs": 4,
00:24:05.858 "num_base_bdevs_discovered": 0,
00:24:05.858 "num_base_bdevs_operational": 4,
00:24:05.858 "base_bdevs_list": [
00:24:05.858 {
00:24:05.858 "name": "BaseBdev1",
00:24:05.858 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:05.858 "is_configured": false,
00:24:05.858 "data_offset": 0,
00:24:05.858 "data_size": 0
00:24:05.858 },
00:24:05.858 {
00:24:05.858 "name": "BaseBdev2",
00:24:05.858 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:05.858 "is_configured": false,
00:24:05.858 "data_offset": 0,
00:24:05.858 "data_size": 0
00:24:05.858 },
00:24:05.858 {
00:24:05.859 "name": "BaseBdev3",
00:24:05.859 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:05.859 "is_configured": false,
00:24:05.859 "data_offset": 0,
00:24:05.859 "data_size": 0
00:24:05.859 },
00:24:05.859 {
00:24:05.859 "name": "BaseBdev4",
00:24:05.859 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:05.859 "is_configured": false,
00:24:05.859 "data_offset": 0,
00:24:05.859 "data_size": 0
00:24:05.859 }
00:24:05.859 ]
00:24:05.859 }'
00:24:05.859 16:41:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:05.859 16:41:37 -- common/autotest_common.sh@10 -- # set +x
00:24:06.427 16:41:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:24:06.687 [2024-07-13 16:41:38.059343] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:24:06.687 [2024-07-13 16:41:38.059408] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:24:06.687 16:41:38 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:24:06.946 [2024-07-13 16:41:38.351432] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:24:06.946 [2024-07-13 16:41:38.351515] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:24:06.946 [2024-07-13 16:41:38.351525] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:24:06.946 [2024-07-13 16:41:38.351552] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:24:06.946 [2024-07-13 16:41:38.351560] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:24:06.946 [2024-07-13 16:41:38.351577] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:24:06.946 [2024-07-13 16:41:38.351583] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:24:06.946 [2024-07-13 16:41:38.351609] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:24:06.946 16:41:38 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:24:07.204 [2024-07-13 16:41:38.611439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:07.204 BaseBdev1
00:24:07.204 16:41:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:24:07.204 16:41:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:24:07.204 16:41:38 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:24:07.204 16:41:38 -- common/autotest_common.sh@889 -- # local i
00:24:07.204 16:41:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:24:07.204 16:41:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:24:07.204 16:41:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:24:07.463 16:41:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:24:07.722 [
00:24:07.722 {
00:24:07.722 "name": "BaseBdev1",
00:24:07.722 "aliases": [
00:24:07.722 "8c329002-9c46-4594-beb6-b369afc68457"
00:24:07.722 ],
00:24:07.722 "product_name": "Malloc disk",
00:24:07.722 "block_size": 512,
00:24:07.722 "num_blocks": 65536,
00:24:07.722 "uuid": "8c329002-9c46-4594-beb6-b369afc68457",
00:24:07.722 "assigned_rate_limits": {
00:24:07.722 "rw_ios_per_sec": 0,
00:24:07.722 "rw_mbytes_per_sec": 0,
00:24:07.722 "r_mbytes_per_sec": 0,
00:24:07.722 "w_mbytes_per_sec": 0
00:24:07.722 },
00:24:07.722 "claimed": true,
00:24:07.722 "claim_type": "exclusive_write",
00:24:07.722 "zoned": false,
00:24:07.722 "supported_io_types": {
00:24:07.722 "read": true,
00:24:07.722 "write": true,
00:24:07.722 "unmap": true,
00:24:07.722 "write_zeroes": true,
00:24:07.722 "flush": true,
00:24:07.722 "reset": true,
00:24:07.722 "compare": false,
00:24:07.722 "compare_and_write": false,
00:24:07.722 "abort": true,
00:24:07.722 "nvme_admin": false,
00:24:07.722 "nvme_io": false
00:24:07.722 },
00:24:07.722 "memory_domains": [
00:24:07.722 {
00:24:07.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:07.722 "dma_device_type": 2
00:24:07.722 }
00:24:07.722 ],
00:24:07.722 "driver_specific": {}
00:24:07.722 }
00:24:07.722 ]
00:24:07.722 16:41:39 -- common/autotest_common.sh@895 -- # return 0
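waitforbdev has just returned for BaseBdev1: after bdev_malloc_create, the helper flushes outstanding examine callbacks with bdev_wait_for_examine and then queries bdev_get_bdevs with the traced 2000 ms default timeout until the bdev is visible. As a sketch reconstructed from the @887-@895 trace (the function body itself is not printed by the log):

  waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}       # @890: default timeout in milliseconds
    $rpc bdev_wait_for_examine          # @892: let bdev examine hooks settle
    $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null   # @894
  }
  waitforbdev BaseBdev1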
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:07.722 16:41:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:07.982 16:41:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:07.982 "name": "Existed_Raid",
00:24:07.982 "uuid": "38d91f34-ae39-4dfe-9e44-be59d59a218c",
00:24:07.982 "strip_size_kb": 64,
00:24:07.982 "state": "configuring",
00:24:07.982 "raid_level": "raid5f",
00:24:07.982 "superblock": true,
00:24:07.982 "num_base_bdevs": 4,
00:24:07.982 "num_base_bdevs_discovered": 1,
00:24:07.982 "num_base_bdevs_operational": 4,
00:24:07.982 "base_bdevs_list": [
00:24:07.982 {
00:24:07.982 "name": "BaseBdev1",
00:24:07.982 "uuid": "8c329002-9c46-4594-beb6-b369afc68457",
00:24:07.982 "is_configured": true,
00:24:07.982 "data_offset": 2048,
00:24:07.982 "data_size": 63488
00:24:07.982 },
00:24:07.982 {
00:24:07.982 "name": "BaseBdev2",
00:24:07.982 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:07.982 "is_configured": false,
00:24:07.982 "data_offset": 0,
00:24:07.982 "data_size": 0
00:24:07.982 },
00:24:07.982 {
00:24:07.982 "name": "BaseBdev3",
00:24:07.982 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:07.982 "is_configured": false,
00:24:07.982 "data_offset": 0,
00:24:07.982 "data_size": 0
00:24:07.982 },
00:24:07.982 {
00:24:07.982 "name": "BaseBdev4",
00:24:07.982 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:07.982 "is_configured": false,
00:24:07.982 "data_offset": 0,
00:24:07.982 "data_size": 0
00:24:07.982 }
00:24:07.982 ]
00:24:07.982 }'
00:24:07.982 16:41:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:07.982 16:41:39 -- common/autotest_common.sh@10 -- # set +x
00:24:08.241 16:41:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:24:08.520 [2024-07-13 16:41:39.935720] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:24:08.520 [2024-07-13 16:41:39.935799] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:24:08.520 16:41:39 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:24:08.520 16:41:39 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:24:08.836 16:41:40 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:24:09.095 BaseBdev1
00:24:09.095 16:41:40 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:24:09.095 16:41:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:24:09.095 16:41:40 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:24:09.095 16:41:40 -- common/autotest_common.sh@889 -- # local i
00:24:09.095 16:41:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:24:09.095 16:41:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:24:09.095 16:41:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:24:09.353 16:41:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:24:09.353 [
00:24:09.353 {
00:24:09.353 "name": "BaseBdev1",
00:24:09.353 "aliases": [
00:24:09.353 "4c420872-15c8-4347-9134-91948f07b384"
00:24:09.353 ],
00:24:09.353 "product_name": "Malloc disk",
00:24:09.353 "block_size": 512,
00:24:09.353 "num_blocks": 65536,
00:24:09.353 "uuid": "4c420872-15c8-4347-9134-91948f07b384",
00:24:09.353 "assigned_rate_limits": {
00:24:09.353 "rw_ios_per_sec": 0,
00:24:09.353 "rw_mbytes_per_sec": 0,
00:24:09.353 "r_mbytes_per_sec": 0,
00:24:09.353 "w_mbytes_per_sec": 0
00:24:09.353 },
00:24:09.353 "claimed": false,
00:24:09.353 "zoned": false,
00:24:09.353 "supported_io_types": {
00:24:09.353 "read": true,
00:24:09.353 "write": true,
00:24:09.353 "unmap": true,
00:24:09.353 "write_zeroes": true,
00:24:09.353 "flush": true,
00:24:09.353 "reset": true,
00:24:09.353 "compare": false,
00:24:09.353 "compare_and_write": false,
00:24:09.353 "abort": true,
00:24:09.353 "nvme_admin": false,
00:24:09.353 "nvme_io": false
00:24:09.353 },
00:24:09.353 "memory_domains": [
00:24:09.353 {
00:24:09.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:09.353 "dma_device_type": 2
00:24:09.353 }
00:24:09.353 ],
00:24:09.353 "driver_specific": {}
00:24:09.353 }
00:24:09.353 ]
00:24:09.354 16:41:40 -- common/autotest_common.sh@895 -- # return 0
00:24:09.354 16:41:40 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:24:09.612 [2024-07-13 16:41:41.048352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:09.612 [2024-07-13 16:41:41.050799] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:24:09.612 [2024-07-13 16:41:41.050894] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:24:09.612 [2024-07-13 16:41:41.050904] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:24:09.612 [2024-07-13 16:41:41.050929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:24:09.612 [2024-07-13 16:41:41.050937] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:24:09.612 [2024-07-13 16:41:41.050956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:24:09.612 16:41:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:24:09.612 16:41:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:24:09.612 16:41:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:09.612 16:41:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:09.613 16:41:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:10.180 16:41:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:10.180 "name": "Existed_Raid",
00:24:10.180 "uuid": "d82e6d5e-f2d1-4599-b0e2-e9a3eca3c035",
00:24:10.180 "strip_size_kb": 64,
00:24:10.180 "state": "configuring",
00:24:10.180 "raid_level": "raid5f",
00:24:10.180 "superblock": true,
00:24:10.180 "num_base_bdevs": 4,
00:24:10.180 "num_base_bdevs_discovered": 1,
00:24:10.180 "num_base_bdevs_operational": 4,
00:24:10.180 "base_bdevs_list": [
00:24:10.180 {
00:24:10.180 "name": "BaseBdev1",
00:24:10.180 "uuid": "4c420872-15c8-4347-9134-91948f07b384",
00:24:10.180 "is_configured": true,
00:24:10.180 "data_offset": 2048,
00:24:10.180 "data_size": 63488
00:24:10.180 },
00:24:10.180 {
00:24:10.180 "name": "BaseBdev2",
00:24:10.180 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:10.180 "is_configured": false,
00:24:10.180 "data_offset": 0,
00:24:10.180 "data_size": 0
00:24:10.180 },
00:24:10.180 {
00:24:10.180 "name": "BaseBdev3",
00:24:10.180 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:10.180 "is_configured": false,
00:24:10.180 "data_offset": 0,
00:24:10.180 "data_size": 0
00:24:10.180 },
00:24:10.180 {
00:24:10.180 "name": "BaseBdev4",
00:24:10.180 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:10.180 "is_configured": false,
00:24:10.180 "data_offset": 0,
00:24:10.180 "data_size": 0
00:24:10.180 }
00:24:10.180 ]
00:24:10.180 }'
00:24:10.180 16:41:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:10.180 16:41:41 -- common/autotest_common.sh@10 -- # set +x
00:24:10.747 16:41:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:24:10.747 [2024-07-13 16:41:42.137987] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:10.747 BaseBdev2
00:24:10.747 16:41:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:24:10.747 16:41:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2
00:24:10.747 16:41:42 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:24:10.747 16:41:42 -- common/autotest_common.sh@889 -- # local i
00:24:10.747 16:41:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:24:10.747 16:41:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:24:10.747 16:41:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:24:11.006 16:41:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:24:11.264 [
00:24:11.264 {
00:24:11.264 "name": "BaseBdev2",
00:24:11.264 "aliases": [
00:24:11.264 "144dcb47-1453-48fc-83ee-e862dbe72265"
00:24:11.264 ],
00:24:11.264 "product_name": "Malloc disk",
00:24:11.264 "block_size": 512,
00:24:11.264 "num_blocks": 65536,
00:24:11.264 "uuid": "144dcb47-1453-48fc-83ee-e862dbe72265",
00:24:11.264 "assigned_rate_limits": {
00:24:11.264 "rw_ios_per_sec": 0,
00:24:11.264 "rw_mbytes_per_sec": 0,
00:24:11.264 "r_mbytes_per_sec": 0,
00:24:11.264 "w_mbytes_per_sec": 0
00:24:11.264 },
00:24:11.264 "claimed": true,
00:24:11.264 "claim_type": "exclusive_write",
00:24:11.264 "zoned": false,
00:24:11.264 "supported_io_types": {
00:24:11.264 "read": true,
00:24:11.264 "write": true,
00:24:11.264 "unmap": true,
00:24:11.264 "write_zeroes": true,
00:24:11.264 "flush": true,
00:24:11.264 "reset": true,
00:24:11.264 "compare": false,
00:24:11.264 "compare_and_write": false,
00:24:11.264 "abort": true,
00:24:11.264 "nvme_admin": false,
00:24:11.264 "nvme_io": false
00:24:11.264 },
00:24:11.264 "memory_domains": [
00:24:11.264 {
00:24:11.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:11.264 "dma_device_type": 2
00:24:11.264 }
00:24:11.264 ],
00:24:11.264 "driver_specific": {}
00:24:11.264 }
00:24:11.264 ]
00:24:11.264 16:41:42 -- common/autotest_common.sh@895 -- # return 0
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:11.264 16:41:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:11.264 "name": "Existed_Raid",
00:24:11.264 "uuid": "d82e6d5e-f2d1-4599-b0e2-e9a3eca3c035",
00:24:11.264 "strip_size_kb": 64,
00:24:11.264 "state": "configuring",
00:24:11.264 "raid_level": "raid5f",
00:24:11.264 "superblock": true,
00:24:11.264 "num_base_bdevs": 4,
00:24:11.264 "num_base_bdevs_discovered": 2,
00:24:11.264 "num_base_bdevs_operational": 4,
00:24:11.264 "base_bdevs_list": [
00:24:11.264 {
00:24:11.264 "name": "BaseBdev1",
00:24:11.264 "uuid": "4c420872-15c8-4347-9134-91948f07b384",
00:24:11.264 "is_configured": true,
00:24:11.264 "data_offset": 2048,
00:24:11.264 "data_size": 63488
00:24:11.264 },
00:24:11.265 {
00:24:11.265 "name": "BaseBdev2",
00:24:11.265 "uuid": "144dcb47-1453-48fc-83ee-e862dbe72265",
00:24:11.265 "is_configured": true,
00:24:11.265 "data_offset": 2048,
00:24:11.265 "data_size": 63488
00:24:11.265 },
00:24:11.265 {
00:24:11.265 "name": "BaseBdev3",
00:24:11.265 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:11.265 "is_configured": false,
00:24:11.265 "data_offset": 0,
00:24:11.265 "data_size": 0
00:24:11.265 },
00:24:11.265 {
00:24:11.265 "name": "BaseBdev4",
00:24:11.265 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:11.265 "is_configured": false,
00:24:11.265 "data_offset": 0,
00:24:11.265 "data_size": 0
00:24:11.265 }
00:24:11.265 ]
00:24:11.265 }'
00:24:11.265 16:41:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:11.265 16:41:42 -- common/autotest_common.sh@10 -- # set +x
00:24:11.834 16:41:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:24:12.093 [2024-07-13 16:41:43.507535] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:12.093 BaseBdev3
00:24:12.093 16:41:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:24:12.093 16:41:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3
00:24:12.093 16:41:43 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:24:12.093 16:41:43 -- common/autotest_common.sh@889 -- # local i
00:24:12.093 16:41:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:24:12.093 16:41:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:24:12.093 16:41:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:24:12.351 16:41:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:24:12.609 [
00:24:12.609 {
00:24:12.609 "name": "BaseBdev3",
00:24:12.609 "aliases": [
00:24:12.609 "03d343c4-db3f-44aa-9045-740f845f81bd"
00:24:12.609 ],
00:24:12.609 "product_name": "Malloc disk",
00:24:12.609 "block_size": 512,
00:24:12.609 "num_blocks": 65536,
00:24:12.609 "uuid": "03d343c4-db3f-44aa-9045-740f845f81bd",
00:24:12.609 "assigned_rate_limits": {
00:24:12.609 "rw_ios_per_sec": 0,
00:24:12.609 "rw_mbytes_per_sec": 0,
00:24:12.609 "r_mbytes_per_sec": 0,
00:24:12.609 "w_mbytes_per_sec": 0
00:24:12.609 },
00:24:12.609 "claimed": true,
00:24:12.609 "claim_type": "exclusive_write",
00:24:12.609 "zoned": false,
00:24:12.609 "supported_io_types": {
00:24:12.609 "read": true,
00:24:12.609 "write": true,
00:24:12.609 "unmap": true,
00:24:12.609 "write_zeroes": true,
00:24:12.609 "flush": true,
00:24:12.609 "reset": true,
00:24:12.609 "compare": false,
00:24:12.609 "compare_and_write": false,
00:24:12.609 "abort": true,
00:24:12.609 "nvme_admin": false,
00:24:12.609 "nvme_io": false
00:24:12.609 },
00:24:12.609 "memory_domains": [
00:24:12.609 {
00:24:12.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:12.609 "dma_device_type": 2
00:24:12.609 }
00:24:12.609 ],
00:24:12.609 "driver_specific": {}
00:24:12.609 }
00:24:12.609 ]
00:24:12.609 16:41:43 -- common/autotest_common.sh@895 -- # return 0
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:12.609 16:41:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:12.867 16:41:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:12.867 "name": "Existed_Raid",
00:24:12.867 "uuid": "d82e6d5e-f2d1-4599-b0e2-e9a3eca3c035",
00:24:12.867 "strip_size_kb": 64,
00:24:12.867 "state": "configuring",
00:24:12.867 "raid_level": "raid5f",
00:24:12.867 "superblock": true,
00:24:12.867 "num_base_bdevs": 4,
00:24:12.867 "num_base_bdevs_discovered": 3,
00:24:12.867 "num_base_bdevs_operational": 4,
00:24:12.867 "base_bdevs_list": [
00:24:12.867 {
00:24:12.867 "name": "BaseBdev1",
00:24:12.867 "uuid": "4c420872-15c8-4347-9134-91948f07b384",
00:24:12.867 "is_configured": true,
00:24:12.867 "data_offset": 2048,
00:24:12.867 "data_size": 63488
00:24:12.867 },
00:24:12.867 {
00:24:12.867 "name": "BaseBdev2",
00:24:12.867 "uuid": "144dcb47-1453-48fc-83ee-e862dbe72265",
00:24:12.867 "is_configured": true,
00:24:12.867 "data_offset": 2048,
00:24:12.867 "data_size": 63488
00:24:12.867 },
00:24:12.867 {
00:24:12.867 "name": "BaseBdev3",
00:24:12.867 "uuid": "03d343c4-db3f-44aa-9045-740f845f81bd",
00:24:12.867 "is_configured": true,
00:24:12.867 "data_offset": 2048,
00:24:12.867 "data_size": 63488
00:24:12.867 },
00:24:12.867 {
00:24:12.867 "name": "BaseBdev4",
00:24:12.867 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:12.867 "is_configured": false,
00:24:12.867 "data_offset": 0,
00:24:12.867 "data_size": 0
00:24:12.867 }
00:24:12.867 ]
00:24:12.867 }'
00:24:12.867 16:41:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:12.867 16:41:44 -- common/autotest_common.sh@10 -- # set +x
00:24:13.433 16:41:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:24:13.691 [2024-07-13 16:41:45.001598] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:24:13.691 [2024-07-13 16:41:45.001889] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:24:13.691 [2024-07-13 16:41:45.001903] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:24:13.691 [2024-07-13 16:41:45.002077] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0
00:24:13.691 [2024-07-13 16:41:45.002970] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:24:13.691 [2024-07-13 16:41:45.002992] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:24:13.691 [2024-07-13 16:41:45.003144] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:13.691 BaseBdev4
00:24:13.691 16:41:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:24:13.691 16:41:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4
00:24:13.691 16:41:45 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:24:13.691 16:41:45 -- common/autotest_common.sh@889 -- # local i
00:24:13.691 16:41:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:24:13.691 16:41:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:24:13.691 16:41:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:24:13.949 16:41:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:24:14.207 [
00:24:14.207 {
00:24:14.207 "name": "BaseBdev4",
00:24:14.207 "aliases": [
00:24:14.207 "0666f928-ceb7-4abe-947f-192855abf5f3"
00:24:14.207 ],
00:24:14.207 "product_name": "Malloc disk",
00:24:14.207 "block_size": 512,
00:24:14.207 "num_blocks": 65536,
00:24:14.207 "uuid": "0666f928-ceb7-4abe-947f-192855abf5f3",
00:24:14.207 "assigned_rate_limits": {
00:24:14.207 "rw_ios_per_sec": 0,
00:24:14.207 "rw_mbytes_per_sec": 0,
00:24:14.207 "r_mbytes_per_sec": 0,
00:24:14.207 "w_mbytes_per_sec": 0
00:24:14.207 },
00:24:14.207 "claimed": true,
00:24:14.207 "claim_type": "exclusive_write",
00:24:14.207 "zoned": false,
00:24:14.207 "supported_io_types": {
00:24:14.207 "read": true,
00:24:14.207 "write": true,
00:24:14.207 "unmap": true,
00:24:14.207 "write_zeroes": true,
00:24:14.207 "flush": true,
00:24:14.207 "reset": true,
00:24:14.207 "compare": false,
00:24:14.207 "compare_and_write": false,
00:24:14.207 "abort": true,
00:24:14.207 "nvme_admin": false,
00:24:14.207 "nvme_io": false
00:24:14.207 },
00:24:14.207 "memory_domains": [
00:24:14.207 {
00:24:14.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:14.207 "dma_device_type": 2
00:24:14.207 }
00:24:14.207 ],
00:24:14.207 "driver_specific": {}
00:24:14.207 }
00:24:14.207 ]
00:24:14.207 16:41:45 -- common/autotest_common.sh@895 -- # return 0
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:14.207 "name": "Existed_Raid",
00:24:14.207 "uuid": "d82e6d5e-f2d1-4599-b0e2-e9a3eca3c035",
00:24:14.207 "strip_size_kb": 64,
00:24:14.207 "state": "online",
00:24:14.207 "raid_level": "raid5f",
00:24:14.207 "superblock": true,
00:24:14.207 "num_base_bdevs": 4,
00:24:14.207 "num_base_bdevs_discovered": 4,
00:24:14.207 "num_base_bdevs_operational": 4,
00:24:14.207 "base_bdevs_list": [
00:24:14.207 {
00:24:14.207 "name": "BaseBdev1",
00:24:14.207 "uuid": "4c420872-15c8-4347-9134-91948f07b384",
00:24:14.207 "is_configured": true,
00:24:14.207 "data_offset": 2048,
00:24:14.207 "data_size": 63488
00:24:14.207 },
00:24:14.207 {
00:24:14.207 "name": "BaseBdev2",
00:24:14.207 "uuid": "144dcb47-1453-48fc-83ee-e862dbe72265",
00:24:14.207 "is_configured": true,
00:24:14.207 "data_offset": 2048,
00:24:14.207 "data_size": 63488
00:24:14.207 },
00:24:14.207 {
00:24:14.207 "name": "BaseBdev3",
00:24:14.207 "uuid": "03d343c4-db3f-44aa-9045-740f845f81bd",
00:24:14.207 "is_configured": true,
00:24:14.207 "data_offset": 2048,
00:24:14.207 "data_size": 63488
00:24:14.207 },
00:24:14.207 {
00:24:14.207 "name": "BaseBdev4",
00:24:14.207 "uuid": "0666f928-ceb7-4abe-947f-192855abf5f3",
00:24:14.207 "is_configured": true,
00:24:14.207 "data_offset": 2048,
00:24:14.207 "data_size": 63488
00:24:14.207 }
00:24:14.207 ]
00:24:14.207 }'
00:24:14.207 16:41:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:14.207 16:41:45 -- common/autotest_common.sh@10 -- # set +x
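With BaseBdev4 claimed, the array transitions to online, and the registered geometry is worth checking against the JSON above. Each 32 MiB malloc bdev has 65536 blocks of 512 bytes; the superblock written because of -s accounts for the 2048-block data_offset, leaving data_size 63488 per member, and the traced blockcnt of 190464 matches three members' worth of data. The parity accounting (raid5f reserving one member's capacity) is inferred from the raid level, not stated anywhere in the log:

  echo $(( (4 - 1) * (65536 - 2048) ))   # 3 * 63488 = 190464, the traced blockcnt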
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:24:15.144 [2024-07-13 16:41:46.492865] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@196 -- # return 0
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@267 -- # expected_state=online
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:15.144 16:41:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:15.403 16:41:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:15.403 "name": "Existed_Raid",
00:24:15.403 "uuid": "d82e6d5e-f2d1-4599-b0e2-e9a3eca3c035",
00:24:15.403 "strip_size_kb": 64,
00:24:15.403 "state": "online",
00:24:15.403 "raid_level": "raid5f",
00:24:15.403 "superblock": true,
00:24:15.403 "num_base_bdevs": 4,
00:24:15.403 "num_base_bdevs_discovered": 3,
00:24:15.403 "num_base_bdevs_operational": 3,
00:24:15.403 "base_bdevs_list": [
00:24:15.403 {
00:24:15.403 "name": null,
00:24:15.403 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:15.403 "is_configured": false,
00:24:15.403 "data_offset": 2048,
00:24:15.403 "data_size": 63488
00:24:15.403 },
00:24:15.403 {
00:24:15.403 "name": "BaseBdev2",
00:24:15.403 "uuid": "144dcb47-1453-48fc-83ee-e862dbe72265",
00:24:15.403 "is_configured": true,
00:24:15.403 "data_offset": 2048,
00:24:15.403 "data_size": 63488
00:24:15.403 },
00:24:15.403 {
00:24:15.403 "name": "BaseBdev3",
00:24:15.403 "uuid": "03d343c4-db3f-44aa-9045-740f845f81bd",
00:24:15.403 "is_configured": true,
00:24:15.403 "data_offset": 2048,
00:24:15.403 "data_size": 63488
00:24:15.403 },
00:24:15.403 {
00:24:15.403 "name": "BaseBdev4",
00:24:15.403 "uuid": "0666f928-ceb7-4abe-947f-192855abf5f3",
00:24:15.403 "is_configured": true,
00:24:15.403 "data_offset": 2048,
00:24:15.403 "data_size": 63488
00:24:15.403 }
00:24:15.403 ]
00:24:15.403 }'
00:24:15.403 16:41:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:15.403 16:41:46 -- common/autotest_common.sh@10 -- # set +x
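This block is the single-failure check: deleting BaseBdev1 out from under the online array triggers _raid_bdev_remove_base_bdev, but has_redundancy returns success for raid5f, so expected_state stays online and the JSON above shows the empty slot as "name": null with 3 of 4 base bdevs discovered and operational. A condensed sketch of the assertion, with the test lines paraphrased from what verify_raid_bdev_state checks:

  $rpc bdev_malloc_delete BaseBdev1
  raid_bdev_info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [ "$(jq -r '.state' <<< "$raid_bdev_info")" = online ]
  [ "$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")" -eq 3 ]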
00:24:15.971 16:41:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:24:15.971 16:41:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:24:15.971 16:41:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:15.971 16:41:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:24:16.229 16:41:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:24:16.229 16:41:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:24:16.229 16:41:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:24:16.229 [2024-07-13 16:41:47.697185] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:24:16.229 [2024-07-13 16:41:47.697236] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:16.229 [2024-07-13 16:41:47.697326] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:16.487 16:41:47 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:24:16.487 16:41:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:24:16.488 16:41:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:16.488 16:41:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:24:16.746 16:41:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:24:16.746 16:41:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:24:16.746 16:41:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:24:17.004 [2024-07-13 16:41:48.238243] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:24:17.004 16:41:48 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:24:17.004 16:41:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:24:17.004 16:41:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:17.004 16:41:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:24:17.262 16:41:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:24:17.262 16:41:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:24:17.262 16:41:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:24:17.262 [2024-07-13 16:41:48.731668] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:24:17.262 [2024-07-13 16:41:48.731742] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:24:17.521 16:41:48 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:24:17.521 16:41:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:24:17.521 16:41:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:17.521 16:41:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:24:17.521 16:41:48 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:24:17.521 16:41:48 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:24:17.521 16:41:48 -- bdev/bdev_raid.sh@287 -- # killprocess 140379
00:24:17.521 16:41:48 -- common/autotest_common.sh@926 -- # '[' -z 140379 ']'
00:24:17.521 16:41:48 -- common/autotest_common.sh@930 -- # kill -0 140379
00:24:17.521 16:41:48 -- common/autotest_common.sh@931 -- # uname
00:24:17.521 16:41:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:17.521 16:41:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140379
00:24:17.521 16:41:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:17.521 16:41:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
killing process with pid 140379
00:24:17.521 16:41:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140379'
00:24:17.521 16:41:48 -- common/autotest_common.sh@945 -- # kill 140379
00:24:17.521 16:41:48 -- common/autotest_common.sh@950 -- # wait 140379
00:24:17.521 [2024-07-13 16:41:48.990170] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:17.521 [2024-07-13 16:41:48.990393] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@289 -- # return 0
00:24:18.088
00:24:18.088 real 0m13.568s
00:24:18.088 user 0m23.908s
00:24:18.088 sys 0m2.556s
00:24:18.088 ************************************
00:24:18.088 END TEST raid5f_state_function_test_sb
00:24:18.088 ************************************
00:24:18.088 16:41:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:18.088 16:41:49 -- common/autotest_common.sh@10 -- # set +x
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4
00:24:18.088 16:41:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:24:18.088 16:41:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:24:18.088 16:41:49 -- common/autotest_common.sh@10 -- # set +x
00:24:18.088 ************************************
00:24:18.088 START TEST raid5f_superblock_test
00:24:18.088 ************************************
00:24:18.088 16:41:49 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']'
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@357 -- # raid_pid=140821
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@358 -- # waitforlisten 140821 /var/tmp/spdk-raid.sock
00:24:18.088 16:41:49 -- common/autotest_common.sh@819 -- # '[' -z 140821 ']'
00:24:18.088 16:41:49 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:24:18.088 16:41:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:24:18.088 16:41:49 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:18.088 16:41:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:24:18.088 16:41:49 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:18.088 16:41:49 -- common/autotest_common.sh@10 -- # set +x
00:24:18.088 [2024-07-13 16:41:49.530524] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
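raid5f_superblock_test drives the same raid5f level through a different base-bdev stack: each malloc bdev gets a passthru bdev layered on top with a fixed UUID (pt1 through pt4), so superblock contents keyed to those UUIDs can be rediscovered after the passthru layer is torn down later in the test. The preparation loop about to be traced condenses to this sketch (commands as traced, the loop form is a paraphrase of the @361-@371 iteration):

  for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
  done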
00:24:18.088 [2024-07-13 16:41:49.530828] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140821 ]
00:24:18.347 [2024-07-13 16:41:49.692036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:18.347 [2024-07-13 16:41:49.778085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:18.606 [2024-07-13 16:41:49.862816] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:19.173 16:41:50 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:19.173 16:41:50 -- common/autotest_common.sh@852 -- # return 0
00:24:19.173 16:41:50 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:24:19.173 16:41:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:24:19.173 16:41:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:24:19.174 16:41:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:24:19.174 16:41:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:24:19.174 16:41:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:24:19.174 16:41:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:24:19.174 16:41:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:24:19.174 16:41:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:24:19.433 malloc1
00:24:19.433 16:41:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:24:19.692 [2024-07-13 16:41:50.921487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:24:19.692 [2024-07-13 16:41:50.921601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:19.692 [2024-07-13 16:41:50.921650] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:24:19.692 [2024-07-13 16:41:50.921717] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:19.692 [2024-07-13 16:41:50.924622] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:19.692 [2024-07-13 16:41:50.924686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:24:19.692 pt1
00:24:19.692 16:41:50 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:24:19.692 16:41:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:24:19.692 16:41:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:24:19.692 16:41:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:24:19.692 16:41:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:24:19.692 16:41:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:24:19.692 16:41:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:24:19.692 16:41:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:24:19.692 16:41:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:24:19.951 malloc2
00:24:19.951 16:41:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:24:20.211 [2024-07-13 16:41:51.480867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:24:20.211 [2024-07-13 16:41:51.480957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:20.211 [2024-07-13 16:41:51.480999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:24:20.211 [2024-07-13 16:41:51.481049] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:20.211 [2024-07-13 16:41:51.483759] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:20.211 [2024-07-13 16:41:51.483830] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:24:20.211 pt2
00:24:20.211 16:41:51 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:24:20.211 16:41:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:24:20.211 16:41:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:24:20.211 16:41:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:24:20.211 16:41:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:24:20.211 16:41:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:24:20.211 16:41:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:24:20.211 16:41:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:24:20.211 16:41:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:24:20.470 malloc3
00:24:20.470 16:41:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:24:20.729 [2024-07-13 16:41:51.956324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:24:20.729 [2024-07-13 16:41:51.956435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:20.729 [2024-07-13 16:41:51.956481] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:24:20.729 [2024-07-13 16:41:51.956526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:20.729 [2024-07-13 16:41:51.959268] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:20.729 [2024-07-13 16:41:51.959325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:24:20.729 pt3
00:24:20.729 16:41:51 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:24:20.729 16:41:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:24:20.729 16:41:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:24:20.729 16:41:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:24:20.729 16:41:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:24:20.729 16:41:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:24:20.729 16:41:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:24:20.729 16:41:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:24:20.729 16:41:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:24:20.729 malloc4
00:24:20.729 16:41:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:24:20.986 [2024-07-13 16:41:52.339751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:24:20.986 [2024-07-13 16:41:52.339887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:20.986 [2024-07-13 16:41:52.339926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:24:20.986 [2024-07-13 16:41:52.339972] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:20.986 [2024-07-13 16:41:52.342753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:20.986 [2024-07-13 16:41:52.342804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:24:20.986 pt4
00:24:20.986 16:41:52 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:24:20.986 16:41:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
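With pt1 through pt4 in place, the test creates raid_bdev1 with the -s flag so a superblock is persisted to every member, then verifies the array comes up online and captures its UUID for the later rediscovery check. Condensed to a sketch (the create command and the jq filter are verbatim from the trace, the final test line paraphrases the @380 check):

  $rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  raid_bdev_uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  [ -n "$raid_bdev_uuid" ]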
"1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:21.503 "strip_size_kb": 64, 00:24:21.503 "state": "online", 00:24:21.503 "raid_level": "raid5f", 00:24:21.503 "superblock": true, 00:24:21.503 "num_base_bdevs": 4, 00:24:21.503 "num_base_bdevs_discovered": 4, 00:24:21.503 "num_base_bdevs_operational": 4, 00:24:21.503 "base_bdevs_list": [ 00:24:21.503 { 00:24:21.503 "name": "pt1", 00:24:21.503 "uuid": "11fbf0a3-eb6d-582e-9c2d-46aa4dfd0e96", 00:24:21.503 "is_configured": true, 00:24:21.503 "data_offset": 2048, 00:24:21.503 "data_size": 63488 00:24:21.503 }, 00:24:21.503 { 00:24:21.503 "name": "pt2", 00:24:21.503 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:21.503 "is_configured": true, 00:24:21.503 "data_offset": 2048, 00:24:21.503 "data_size": 63488 00:24:21.503 }, 00:24:21.503 { 00:24:21.503 "name": "pt3", 00:24:21.503 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:21.503 "is_configured": true, 00:24:21.503 "data_offset": 2048, 00:24:21.503 "data_size": 63488 00:24:21.503 }, 00:24:21.503 { 00:24:21.503 "name": "pt4", 00:24:21.503 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:21.503 "is_configured": true, 00:24:21.503 "data_offset": 2048, 00:24:21.503 "data_size": 63488 00:24:21.503 } 00:24:21.503 ] 00:24:21.503 }' 00:24:21.503 16:41:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:21.503 16:41:52 -- common/autotest_common.sh@10 -- # set +x 00:24:22.068 16:41:53 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:22.068 16:41:53 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:22.327 [2024-07-13 16:41:53.556329] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:22.327 16:41:53 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1cc01b25-efdd-4745-99fb-a4b1bfae0fad 00:24:22.327 16:41:53 -- bdev/bdev_raid.sh@380 -- # '[' -z 1cc01b25-efdd-4745-99fb-a4b1bfae0fad ']' 00:24:22.327 16:41:53 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:22.584 [2024-07-13 16:41:53.816192] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:22.584 [2024-07-13 16:41:53.816233] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:22.584 [2024-07-13 16:41:53.816387] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:22.584 [2024-07-13 16:41:53.816494] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:22.584 [2024-07-13 16:41:53.816505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:24:22.584 16:41:53 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.584 16:41:53 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:22.842 16:41:54 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:24:22.842 16:41:54 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:22.842 16:41:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:22.842 16:41:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:22.842 16:41:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:22.842 16:41:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:24:23.100 16:41:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:23.100 16:41:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:23.358 16:41:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:23.358 16:41:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:23.619 16:41:54 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:23.619 16:41:54 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:23.619 16:41:55 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:23.619 16:41:55 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:23.619 16:41:55 -- common/autotest_common.sh@640 -- # local es=0 00:24:23.619 16:41:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:23.619 16:41:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.619 16:41:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:23.619 16:41:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.619 16:41:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:23.619 16:41:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.619 16:41:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:23.619 16:41:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.619 16:41:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:23.619 16:41:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:23.898 [2024-07-13 16:41:55.236443] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:23.898 [2024-07-13 16:41:55.238856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:23.898 [2024-07-13 16:41:55.238920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:23.898 [2024-07-13 16:41:55.238950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:23.898 [2024-07-13 16:41:55.238999] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:23.898 [2024-07-13 16:41:55.239084] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:23.898 [2024-07-13 16:41:55.239112] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:24:23.898 [2024-07-13 16:41:55.239168] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:24:23.898 [2024-07-13 16:41:55.239211] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:23.898 [2024-07-13 16:41:55.239222] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:24:23.898 request: 00:24:23.898 { 00:24:23.898 "name": "raid_bdev1", 00:24:23.898 "raid_level": "raid5f", 00:24:23.898 "base_bdevs": [ 00:24:23.898 "malloc1", 00:24:23.898 "malloc2", 00:24:23.898 "malloc3", 00:24:23.898 "malloc4" 00:24:23.898 ], 00:24:23.898 "superblock": false, 00:24:23.898 "strip_size_kb": 64, 00:24:23.898 "method": "bdev_raid_create", 00:24:23.898 "req_id": 1 00:24:23.898 } 00:24:23.898 Got JSON-RPC error response 00:24:23.898 response: 00:24:23.898 { 00:24:23.898 "code": -17, 00:24:23.898 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:23.898 } 00:24:23.898 16:41:55 -- common/autotest_common.sh@643 -- # es=1 00:24:23.898 16:41:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:23.898 16:41:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:23.898 16:41:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:23.898 16:41:55 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:23.898 16:41:55 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.199 16:41:55 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:24.199 16:41:55 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:24.199 16:41:55 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:24.459 [2024-07-13 16:41:55.684431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:24.459 [2024-07-13 16:41:55.684530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:24.459 [2024-07-13 16:41:55.684574] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:24.459 [2024-07-13 16:41:55.684605] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:24.459 [2024-07-13 16:41:55.687292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:24.459 [2024-07-13 16:41:55.687372] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:24.459 [2024-07-13 16:41:55.687470] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:24.459 [2024-07-13 16:41:55.687525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:24.459 pt1 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.459 16:41:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:24.459 "name": "raid_bdev1", 00:24:24.459 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:24.459 "strip_size_kb": 64, 00:24:24.459 "state": "configuring", 00:24:24.459 "raid_level": "raid5f", 00:24:24.459 "superblock": true, 00:24:24.459 "num_base_bdevs": 4, 00:24:24.459 "num_base_bdevs_discovered": 1, 00:24:24.459 "num_base_bdevs_operational": 4, 00:24:24.459 "base_bdevs_list": [ 00:24:24.459 { 00:24:24.459 "name": "pt1", 00:24:24.459 "uuid": "11fbf0a3-eb6d-582e-9c2d-46aa4dfd0e96", 00:24:24.459 "is_configured": true, 00:24:24.459 "data_offset": 2048, 00:24:24.459 "data_size": 63488 00:24:24.459 }, 00:24:24.459 { 00:24:24.459 "name": null, 00:24:24.459 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:24.459 "is_configured": false, 00:24:24.459 "data_offset": 2048, 00:24:24.459 "data_size": 63488 00:24:24.459 }, 00:24:24.459 { 00:24:24.459 "name": null, 00:24:24.459 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:24.459 "is_configured": false, 00:24:24.459 "data_offset": 2048, 00:24:24.459 "data_size": 63488 00:24:24.459 }, 00:24:24.459 { 00:24:24.459 "name": null, 00:24:24.459 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:24.459 "is_configured": false, 00:24:24.459 "data_offset": 2048, 00:24:24.460 "data_size": 63488 00:24:24.460 } 00:24:24.460 ] 00:24:24.460 }' 00:24:24.460 16:41:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:24.460 16:41:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.029 16:41:56 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:24:25.029 16:41:56 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:25.288 [2024-07-13 16:41:56.604642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:25.288 [2024-07-13 16:41:56.604768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.288 [2024-07-13 16:41:56.604818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:25.288 [2024-07-13 16:41:56.604841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.288 [2024-07-13 16:41:56.605341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.288 [2024-07-13 16:41:56.605393] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:25.288 [2024-07-13 16:41:56.605495] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:25.288 [2024-07-13 16:41:56.605518] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:25.288 pt2 00:24:25.288 16:41:56 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:25.547 [2024-07-13 16:41:56.868700] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:25.547 16:41:56 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.547 16:41:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.806 16:41:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:25.806 "name": "raid_bdev1", 00:24:25.806 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:25.806 "strip_size_kb": 64, 00:24:25.806 "state": "configuring", 00:24:25.806 "raid_level": "raid5f", 00:24:25.806 "superblock": true, 00:24:25.806 "num_base_bdevs": 4, 00:24:25.806 "num_base_bdevs_discovered": 1, 00:24:25.806 "num_base_bdevs_operational": 4, 00:24:25.806 "base_bdevs_list": [ 00:24:25.806 { 00:24:25.806 "name": "pt1", 00:24:25.806 "uuid": "11fbf0a3-eb6d-582e-9c2d-46aa4dfd0e96", 00:24:25.806 "is_configured": true, 00:24:25.806 "data_offset": 2048, 00:24:25.806 "data_size": 63488 00:24:25.806 }, 00:24:25.806 { 00:24:25.806 "name": null, 00:24:25.806 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:25.806 "is_configured": false, 00:24:25.806 "data_offset": 2048, 00:24:25.806 "data_size": 63488 00:24:25.806 }, 00:24:25.806 { 00:24:25.806 "name": null, 00:24:25.806 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:25.806 "is_configured": false, 00:24:25.806 "data_offset": 2048, 00:24:25.806 "data_size": 63488 00:24:25.806 }, 00:24:25.806 { 00:24:25.806 "name": null, 00:24:25.806 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:25.806 "is_configured": false, 00:24:25.806 "data_offset": 2048, 00:24:25.806 "data_size": 63488 00:24:25.806 } 00:24:25.806 ] 00:24:25.806 }' 00:24:25.806 16:41:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:25.806 16:41:57 -- common/autotest_common.sh@10 -- # set +x 00:24:26.374 16:41:57 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:24:26.374 16:41:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:26.374 16:41:57 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:26.633 [2024-07-13 16:41:57.916887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:26.633 [2024-07-13 16:41:57.916985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.633 [2024-07-13 16:41:57.917030] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:26.633 [2024-07-13 16:41:57.917056] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.633 [2024-07-13 16:41:57.917555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.633 [2024-07-13 16:41:57.917609] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:26.633 [2024-07-13 16:41:57.917695] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:26.633 [2024-07-13 16:41:57.917719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:26.633 pt2 00:24:26.633 16:41:57 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:26.633 16:41:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:26.633 16:41:57 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:26.892 [2024-07-13 16:41:58.176945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:26.892 [2024-07-13 16:41:58.177050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.892 [2024-07-13 16:41:58.177092] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:26.892 [2024-07-13 16:41:58.177121] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.892 [2024-07-13 16:41:58.177576] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.892 [2024-07-13 16:41:58.177624] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:26.892 [2024-07-13 16:41:58.177704] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:26.892 [2024-07-13 16:41:58.177725] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:26.892 pt3 00:24:26.892 16:41:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:26.892 16:41:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:26.892 16:41:58 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:27.152 [2024-07-13 16:41:58.424969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:27.152 [2024-07-13 16:41:58.425082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.152 [2024-07-13 16:41:58.425117] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:27.152 [2024-07-13 16:41:58.425146] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.152 [2024-07-13 16:41:58.425615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.152 [2024-07-13 16:41:58.425663] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:27.152 [2024-07-13 16:41:58.425738] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:27.152 [2024-07-13 16:41:58.425760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:27.152 [2024-07-13 16:41:58.425900] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:27.152 [2024-07-13 16:41:58.425910] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:27.152 [2024-07-13 16:41:58.425981] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:24:27.152 [2024-07-13 16:41:58.426665] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:27.152 [2024-07-13 16:41:58.426688] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:24:27.152 [2024-07-13 16:41:58.426791] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.152 pt4 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.152 16:41:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.411 16:41:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:27.411 "name": "raid_bdev1", 00:24:27.411 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:27.411 "strip_size_kb": 64, 00:24:27.411 "state": "online", 00:24:27.411 "raid_level": "raid5f", 00:24:27.411 "superblock": true, 00:24:27.411 "num_base_bdevs": 4, 00:24:27.411 "num_base_bdevs_discovered": 4, 00:24:27.411 "num_base_bdevs_operational": 4, 00:24:27.411 "base_bdevs_list": [ 00:24:27.411 { 00:24:27.411 "name": "pt1", 00:24:27.411 "uuid": "11fbf0a3-eb6d-582e-9c2d-46aa4dfd0e96", 00:24:27.411 "is_configured": true, 00:24:27.411 "data_offset": 2048, 00:24:27.411 "data_size": 63488 00:24:27.411 }, 00:24:27.411 { 00:24:27.411 "name": "pt2", 00:24:27.411 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:27.411 "is_configured": true, 00:24:27.411 "data_offset": 2048, 00:24:27.411 "data_size": 63488 00:24:27.411 }, 00:24:27.411 { 00:24:27.411 "name": "pt3", 00:24:27.411 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:27.411 "is_configured": true, 00:24:27.411 "data_offset": 2048, 00:24:27.411 "data_size": 63488 00:24:27.411 }, 00:24:27.411 { 00:24:27.411 "name": "pt4", 00:24:27.411 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:27.411 "is_configured": true, 00:24:27.411 "data_offset": 2048, 00:24:27.411 "data_size": 63488 00:24:27.411 } 00:24:27.411 ] 00:24:27.411 }' 00:24:27.411 16:41:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:27.411 16:41:58 -- common/autotest_common.sh@10 -- # set +x 00:24:27.979 16:41:59 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:27.979 16:41:59 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:24:28.238 [2024-07-13 16:41:59.525801] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:28.238 16:41:59 -- bdev/bdev_raid.sh@430 -- # '[' 1cc01b25-efdd-4745-99fb-a4b1bfae0fad '!=' 1cc01b25-efdd-4745-99fb-a4b1bfae0fad ']' 00:24:28.238 16:41:59 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:24:28.238 16:41:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:28.238 16:41:59 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:28.238 16:41:59 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:28.497 [2024-07-13 16:41:59.745734] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.497 16:41:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.756 16:42:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:28.756 "name": "raid_bdev1", 00:24:28.756 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:28.756 "strip_size_kb": 64, 00:24:28.756 "state": "online", 00:24:28.756 "raid_level": "raid5f", 00:24:28.756 "superblock": true, 00:24:28.756 "num_base_bdevs": 4, 00:24:28.756 "num_base_bdevs_discovered": 3, 00:24:28.756 "num_base_bdevs_operational": 3, 00:24:28.756 "base_bdevs_list": [ 00:24:28.756 { 00:24:28.756 "name": null, 00:24:28.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.756 "is_configured": false, 00:24:28.756 "data_offset": 2048, 00:24:28.756 "data_size": 63488 00:24:28.756 }, 00:24:28.756 { 00:24:28.756 "name": "pt2", 00:24:28.756 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:28.756 "is_configured": true, 00:24:28.756 "data_offset": 2048, 00:24:28.756 "data_size": 63488 00:24:28.756 }, 00:24:28.756 { 00:24:28.756 "name": "pt3", 00:24:28.756 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:28.756 "is_configured": true, 00:24:28.756 "data_offset": 2048, 00:24:28.756 "data_size": 63488 00:24:28.756 }, 00:24:28.756 { 00:24:28.756 "name": "pt4", 00:24:28.756 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:28.756 "is_configured": true, 00:24:28.756 "data_offset": 2048, 00:24:28.756 "data_size": 63488 00:24:28.756 } 00:24:28.756 ] 00:24:28.756 }' 00:24:28.756 16:42:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:28.756 16:42:00 -- common/autotest_common.sh@10 -- # set +x 00:24:29.322 16:42:00 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:29.322 [2024-07-13 16:42:00.737900] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:29.322 [2024-07-13 16:42:00.737953] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:29.322 [2024-07-13 16:42:00.738043] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:29.322 [2024-07-13 16:42:00.738134] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:29.322 [2024-07-13 16:42:00.738144] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:24:29.322 16:42:00 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.322 16:42:00 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:24:29.581 
16:42:01 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:24:29.581 16:42:01 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:24:29.581 16:42:01 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:24:29.581 16:42:01 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:29.581 16:42:01 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:29.840 16:42:01 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:29.840 16:42:01 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:29.840 16:42:01 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:30.099 16:42:01 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:30.099 16:42:01 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:30.099 16:42:01 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:30.357 16:42:01 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:30.357 16:42:01 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:30.357 16:42:01 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:24:30.357 16:42:01 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:30.357 16:42:01 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:30.615 [2024-07-13 16:42:01.902047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:30.615 [2024-07-13 16:42:01.902159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.615 [2024-07-13 16:42:01.902216] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:24:30.615 [2024-07-13 16:42:01.902247] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.615 [2024-07-13 16:42:01.904998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.615 [2024-07-13 16:42:01.905068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:30.615 [2024-07-13 16:42:01.905167] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:30.615 [2024-07-13 16:42:01.905202] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:30.615 pt2 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.615 16:42:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.874 16:42:02 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:24:30.874 "name": "raid_bdev1", 00:24:30.874 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:30.874 "strip_size_kb": 64, 00:24:30.874 "state": "configuring", 00:24:30.874 "raid_level": "raid5f", 00:24:30.874 "superblock": true, 00:24:30.874 "num_base_bdevs": 4, 00:24:30.874 "num_base_bdevs_discovered": 1, 00:24:30.874 "num_base_bdevs_operational": 3, 00:24:30.874 "base_bdevs_list": [ 00:24:30.874 { 00:24:30.874 "name": null, 00:24:30.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.874 "is_configured": false, 00:24:30.874 "data_offset": 2048, 00:24:30.874 "data_size": 63488 00:24:30.874 }, 00:24:30.874 { 00:24:30.874 "name": "pt2", 00:24:30.874 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:30.874 "is_configured": true, 00:24:30.874 "data_offset": 2048, 00:24:30.874 "data_size": 63488 00:24:30.874 }, 00:24:30.874 { 00:24:30.874 "name": null, 00:24:30.874 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:30.874 "is_configured": false, 00:24:30.874 "data_offset": 2048, 00:24:30.874 "data_size": 63488 00:24:30.874 }, 00:24:30.874 { 00:24:30.874 "name": null, 00:24:30.874 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:30.874 "is_configured": false, 00:24:30.874 "data_offset": 2048, 00:24:30.874 "data_size": 63488 00:24:30.874 } 00:24:30.874 ] 00:24:30.874 }' 00:24:30.874 16:42:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:30.874 16:42:02 -- common/autotest_common.sh@10 -- # set +x 00:24:31.440 16:42:02 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:31.440 16:42:02 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:31.440 16:42:02 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:31.440 [2024-07-13 16:42:02.798227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:31.440 [2024-07-13 16:42:02.798621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.440 [2024-07-13 16:42:02.798764] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:31.440 [2024-07-13 16:42:02.798878] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.440 [2024-07-13 16:42:02.799397] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.440 [2024-07-13 16:42:02.799560] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:31.440 [2024-07-13 16:42:02.799740] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:31.440 [2024-07-13 16:42:02.799835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:31.440 pt3 00:24:31.440 16:42:02 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:31.440 16:42:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
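The trace above exercises raid5f superblock persistence: `bdev_raid_create ... -s` writes a superblock to every base bdev, so after `bdev_raid_delete raid_bdev1` and deletion of all the passthru bdevs, a fresh `bdev_raid_create` over the raw malloc bdevs is rejected with JSON-RPC error -17 ("File exists"), and re-creating the passthru bdevs one at a time lets the examine path ("raid superblock found on bdev ptN") re-claim each member while raid_bdev1 sits in the "configuring" state. A minimal sketch of one reassembly step, using the rpc.py path and socket from this run; the trailing `.state` projection is an illustrative addition, not part of bdev_raid.sh:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Recreate one passthru member; examine() finds its raid5f superblock
  # and re-claims it into the half-assembled raid bdev.
  $RPC bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  # The raid bdev stays "configuring" until enough members are back.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
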
00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.441 16:42:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.697 16:42:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:31.697 "name": "raid_bdev1", 00:24:31.697 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:31.697 "strip_size_kb": 64, 00:24:31.697 "state": "configuring", 00:24:31.697 "raid_level": "raid5f", 00:24:31.697 "superblock": true, 00:24:31.697 "num_base_bdevs": 4, 00:24:31.697 "num_base_bdevs_discovered": 2, 00:24:31.697 "num_base_bdevs_operational": 3, 00:24:31.697 "base_bdevs_list": [ 00:24:31.697 { 00:24:31.697 "name": null, 00:24:31.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.697 "is_configured": false, 00:24:31.697 "data_offset": 2048, 00:24:31.697 "data_size": 63488 00:24:31.697 }, 00:24:31.697 { 00:24:31.697 "name": "pt2", 00:24:31.697 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:31.697 "is_configured": true, 00:24:31.697 "data_offset": 2048, 00:24:31.697 "data_size": 63488 00:24:31.697 }, 00:24:31.697 { 00:24:31.697 "name": "pt3", 00:24:31.697 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:31.697 "is_configured": true, 00:24:31.697 "data_offset": 2048, 00:24:31.697 "data_size": 63488 00:24:31.697 }, 00:24:31.697 { 00:24:31.697 "name": null, 00:24:31.697 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:31.697 "is_configured": false, 00:24:31.697 "data_offset": 2048, 00:24:31.697 "data_size": 63488 00:24:31.697 } 00:24:31.697 ] 00:24:31.697 }' 00:24:31.697 16:42:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:31.697 16:42:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.263 16:42:03 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:32.263 16:42:03 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:32.263 16:42:03 -- bdev/bdev_raid.sh@462 -- # i=3 00:24:32.263 16:42:03 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:32.522 [2024-07-13 16:42:03.790388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:32.522 [2024-07-13 16:42:03.790677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.522 [2024-07-13 16:42:03.790757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:32.522 [2024-07-13 16:42:03.790848] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.522 [2024-07-13 16:42:03.791379] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.522 [2024-07-13 16:42:03.791522] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:32.522 [2024-07-13 16:42:03.791694] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:32.522 [2024-07-13 16:42:03.791794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:32.522 [2024-07-13 16:42:03.791974] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:24:32.522 [2024-07-13 16:42:03.792060] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:32.522 [2024-07-13 16:42:03.792159] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002c80 00:24:32.522 [2024-07-13 16:42:03.793001] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:24:32.522 [2024-07-13 16:42:03.793121] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:24:32.522 [2024-07-13 16:42:03.793477] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.522 pt4 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.522 16:42:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.780 16:42:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.780 "name": "raid_bdev1", 00:24:32.780 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:32.780 "strip_size_kb": 64, 00:24:32.780 "state": "online", 00:24:32.780 "raid_level": "raid5f", 00:24:32.780 "superblock": true, 00:24:32.780 "num_base_bdevs": 4, 00:24:32.780 "num_base_bdevs_discovered": 3, 00:24:32.780 "num_base_bdevs_operational": 3, 00:24:32.780 "base_bdevs_list": [ 00:24:32.780 { 00:24:32.780 "name": null, 00:24:32.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.780 "is_configured": false, 00:24:32.780 "data_offset": 2048, 00:24:32.780 "data_size": 63488 00:24:32.780 }, 00:24:32.780 { 00:24:32.780 "name": "pt2", 00:24:32.780 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:32.780 "is_configured": true, 00:24:32.780 "data_offset": 2048, 00:24:32.780 "data_size": 63488 00:24:32.780 }, 00:24:32.780 { 00:24:32.780 "name": "pt3", 00:24:32.780 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:32.780 "is_configured": true, 00:24:32.780 "data_offset": 2048, 00:24:32.780 "data_size": 63488 00:24:32.780 }, 00:24:32.780 { 00:24:32.780 "name": "pt4", 00:24:32.780 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:32.780 "is_configured": true, 00:24:32.780 "data_offset": 2048, 00:24:32.780 "data_size": 63488 00:24:32.780 } 00:24:32.780 ] 00:24:32.780 }' 00:24:32.780 16:42:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.780 16:42:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.347 16:42:04 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:24:33.347 16:42:04 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:33.605 [2024-07-13 16:42:04.856390] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.605 [2024-07-13 16:42:04.856554] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.605 [2024-07-13 16:42:04.856781] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.605 [2024-07-13 16:42:04.856899] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.605 [2024-07-13 16:42:04.857072] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:24:33.605 16:42:04 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.605 16:42:04 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:24:33.605 16:42:05 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:24:33.605 16:42:05 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:24:33.605 16:42:05 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:33.864 [2024-07-13 16:42:05.296490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:33.864 [2024-07-13 16:42:05.296825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.864 [2024-07-13 16:42:05.296919] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:33.864 [2024-07-13 16:42:05.297044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.864 [2024-07-13 16:42:05.299903] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.864 [2024-07-13 16:42:05.300091] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:33.864 [2024-07-13 16:42:05.300272] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:33.864 [2024-07-13 16:42:05.300413] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:33.864 pt1 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.864 16:42:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.123 16:42:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:34.123 "name": "raid_bdev1", 00:24:34.123 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:34.123 "strip_size_kb": 64, 00:24:34.123 "state": "configuring", 00:24:34.123 "raid_level": "raid5f", 00:24:34.123 "superblock": true, 00:24:34.123 "num_base_bdevs": 4, 00:24:34.123 "num_base_bdevs_discovered": 1, 00:24:34.123 "num_base_bdevs_operational": 4, 00:24:34.124 "base_bdevs_list": [ 00:24:34.124 { 00:24:34.124 "name": "pt1", 00:24:34.124 "uuid": "11fbf0a3-eb6d-582e-9c2d-46aa4dfd0e96", 00:24:34.124 "is_configured": true, 
00:24:34.124 "data_offset": 2048, 00:24:34.124 "data_size": 63488 00:24:34.124 }, 00:24:34.124 { 00:24:34.124 "name": null, 00:24:34.124 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:34.124 "is_configured": false, 00:24:34.124 "data_offset": 2048, 00:24:34.124 "data_size": 63488 00:24:34.124 }, 00:24:34.124 { 00:24:34.124 "name": null, 00:24:34.124 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:34.124 "is_configured": false, 00:24:34.124 "data_offset": 2048, 00:24:34.124 "data_size": 63488 00:24:34.124 }, 00:24:34.124 { 00:24:34.124 "name": null, 00:24:34.124 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:34.124 "is_configured": false, 00:24:34.124 "data_offset": 2048, 00:24:34.124 "data_size": 63488 00:24:34.124 } 00:24:34.124 ] 00:24:34.124 }' 00:24:34.124 16:42:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:34.124 16:42:05 -- common/autotest_common.sh@10 -- # set +x 00:24:34.693 16:42:06 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:24:34.693 16:42:06 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:34.693 16:42:06 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:34.953 16:42:06 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:34.953 16:42:06 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:34.953 16:42:06 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:35.211 16:42:06 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:35.211 16:42:06 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:35.211 16:42:06 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:35.469 16:42:06 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:35.469 16:42:06 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:35.469 16:42:06 -- bdev/bdev_raid.sh@489 -- # i=3 00:24:35.469 16:42:06 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:35.469 [2024-07-13 16:42:06.860933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:35.469 [2024-07-13 16:42:06.861204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.469 [2024-07-13 16:42:06.861301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:35.469 [2024-07-13 16:42:06.861421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.469 [2024-07-13 16:42:06.861958] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.469 [2024-07-13 16:42:06.862122] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:35.469 [2024-07-13 16:42:06.862297] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:35.470 [2024-07-13 16:42:06.862382] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:35.470 [2024-07-13 16:42:06.862416] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:35.470 [2024-07-13 16:42:06.862514] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:24:35.470 [2024-07-13 16:42:06.862616] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:35.470 pt4 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.470 16:42:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.748 16:42:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.748 "name": "raid_bdev1", 00:24:35.748 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:35.748 "strip_size_kb": 64, 00:24:35.748 "state": "configuring", 00:24:35.748 "raid_level": "raid5f", 00:24:35.748 "superblock": true, 00:24:35.748 "num_base_bdevs": 4, 00:24:35.748 "num_base_bdevs_discovered": 1, 00:24:35.748 "num_base_bdevs_operational": 3, 00:24:35.748 "base_bdevs_list": [ 00:24:35.748 { 00:24:35.748 "name": null, 00:24:35.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.748 "is_configured": false, 00:24:35.748 "data_offset": 2048, 00:24:35.748 "data_size": 63488 00:24:35.748 }, 00:24:35.748 { 00:24:35.748 "name": null, 00:24:35.748 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:35.748 "is_configured": false, 00:24:35.748 "data_offset": 2048, 00:24:35.748 "data_size": 63488 00:24:35.748 }, 00:24:35.748 { 00:24:35.748 "name": null, 00:24:35.748 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:35.748 "is_configured": false, 00:24:35.748 "data_offset": 2048, 00:24:35.748 "data_size": 63488 00:24:35.748 }, 00:24:35.748 { 00:24:35.748 "name": "pt4", 00:24:35.749 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:35.749 "is_configured": true, 00:24:35.749 "data_offset": 2048, 00:24:35.749 "data_size": 63488 00:24:35.749 } 00:24:35.749 ] 00:24:35.749 }' 00:24:35.749 16:42:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.749 16:42:07 -- common/autotest_common.sh@10 -- # set +x 00:24:36.326 16:42:07 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:24:36.326 16:42:07 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:36.326 16:42:07 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:36.589 [2024-07-13 16:42:07.905184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:36.589 [2024-07-13 16:42:07.905519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.589 [2024-07-13 16:42:07.905599] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:36.589 [2024-07-13 16:42:07.905703] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.589 [2024-07-13 16:42:07.906252] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.590 [2024-07-13 16:42:07.906423] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:36.590 [2024-07-13 16:42:07.906617] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:36.590 [2024-07-13 16:42:07.906713] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:36.590 pt2 00:24:36.590 16:42:07 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:36.590 16:42:07 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:36.590 16:42:07 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:36.848 [2024-07-13 16:42:08.073190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:36.848 [2024-07-13 16:42:08.073481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.848 [2024-07-13 16:42:08.073552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:24:36.848 [2024-07-13 16:42:08.073681] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.848 [2024-07-13 16:42:08.074177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.848 [2024-07-13 16:42:08.074337] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:36.848 [2024-07-13 16:42:08.074505] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:36.848 [2024-07-13 16:42:08.074594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:36.848 [2024-07-13 16:42:08.074759] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:24:36.848 [2024-07-13 16:42:08.074909] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:36.848 [2024-07-13 16:42:08.075024] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:24:36.848 [2024-07-13 16:42:08.076004] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:24:36.848 [2024-07-13 16:42:08.076114] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:24:36.848 [2024-07-13 16:42:08.076373] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.848 pt3 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.848 16:42:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.106 16:42:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:37.106 "name": "raid_bdev1", 00:24:37.106 "uuid": "1cc01b25-efdd-4745-99fb-a4b1bfae0fad", 00:24:37.106 "strip_size_kb": 64, 00:24:37.106 "state": "online", 00:24:37.106 "raid_level": "raid5f", 00:24:37.106 "superblock": true, 00:24:37.106 "num_base_bdevs": 4, 00:24:37.106 "num_base_bdevs_discovered": 3, 00:24:37.106 "num_base_bdevs_operational": 3, 00:24:37.106 "base_bdevs_list": [ 00:24:37.106 { 00:24:37.106 "name": null, 00:24:37.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.106 "is_configured": false, 00:24:37.106 "data_offset": 2048, 00:24:37.106 "data_size": 63488 00:24:37.106 }, 00:24:37.106 { 00:24:37.106 "name": "pt2", 00:24:37.106 "uuid": "30fc2bc6-f1d1-5cf3-b645-c9e6f8fd8499", 00:24:37.106 "is_configured": true, 00:24:37.106 "data_offset": 2048, 00:24:37.106 "data_size": 63488 00:24:37.106 }, 00:24:37.106 { 00:24:37.106 "name": "pt3", 00:24:37.106 "uuid": "6252dfc4-ee4f-588f-b3ac-4e9db4b5b6d2", 00:24:37.106 "is_configured": true, 00:24:37.106 "data_offset": 2048, 00:24:37.106 "data_size": 63488 00:24:37.106 }, 00:24:37.106 { 00:24:37.106 "name": "pt4", 00:24:37.106 "uuid": "d5adfad4-ad88-51b8-8a63-37e18d8b1b6c", 00:24:37.106 "is_configured": true, 00:24:37.106 "data_offset": 2048, 00:24:37.106 "data_size": 63488 00:24:37.106 } 00:24:37.106 ] 00:24:37.106 }' 00:24:37.106 16:42:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:37.106 16:42:08 -- common/autotest_common.sh@10 -- # set +x 00:24:37.680 16:42:08 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:37.680 16:42:08 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:24:37.680 [2024-07-13 16:42:09.115417] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:37.680 16:42:09 -- bdev/bdev_raid.sh@506 -- # '[' 1cc01b25-efdd-4745-99fb-a4b1bfae0fad '!=' 1cc01b25-efdd-4745-99fb-a4b1bfae0fad ']' 00:24:37.680 16:42:09 -- bdev/bdev_raid.sh@511 -- # killprocess 140821 00:24:37.680 16:42:09 -- common/autotest_common.sh@926 -- # '[' -z 140821 ']' 00:24:37.680 16:42:09 -- common/autotest_common.sh@930 -- # kill -0 140821 00:24:37.680 16:42:09 -- common/autotest_common.sh@931 -- # uname 00:24:37.941 16:42:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:37.941 16:42:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140821 00:24:37.941 killing process with pid 140821 00:24:37.941 16:42:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:37.941 16:42:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:37.941 16:42:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140821' 00:24:37.941 16:42:09 -- common/autotest_common.sh@945 -- # kill 140821 00:24:37.941 16:42:09 -- common/autotest_common.sh@950 -- # wait 140821 00:24:37.941 [2024-07-13 16:42:09.170775] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:37.941 [2024-07-13 16:42:09.170872] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:37.941 [2024-07-13 16:42:09.170971] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:37.941 [2024-07-13 16:42:09.170985] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:24:37.941 [2024-07-13 16:42:09.249391] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:38.198 16:42:09 -- bdev/bdev_raid.sh@513 -- # return 0 00:24:38.198 00:24:38.198 real 0m20.183s 00:24:38.198 user 0m36.329s 00:24:38.198 sys 0m3.713s 00:24:38.198 16:42:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:38.198 16:42:09 -- common/autotest_common.sh@10 -- # set +x 00:24:38.198 ************************************ 00:24:38.198 END TEST raid5f_superblock_test 00:24:38.198 ************************************ 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:24:38.456 16:42:09 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:38.456 16:42:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:38.456 16:42:09 -- common/autotest_common.sh@10 -- # set +x 00:24:38.456 ************************************ 00:24:38.456 START TEST raid5f_rebuild_test 00:24:38.456 ************************************ 00:24:38.456 16:42:09 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@544 -- # raid_pid=141471 
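raid5f_rebuild_test drives its I/O through the bdevperf example app rather than a bare SPDK target: bdevperf is started with -z so it waits for RPC configuration, its pid is recorded in raid_pid, and waitforlisten blocks until the RPC socket answers before any bdev RPCs are issued (see the trace below). A sketch of the launch with the exact options from this run; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten, not its actual implementation:

  # 60 s of 50/50 random read/write, 3 MiB I/Os, queue depth 2, bdev_raid
  # debug logging; -z defers I/O until the app is configured over RPC.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Simplified stand-in: poll until the RPC socket is up.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
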
00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@545 -- # waitforlisten 141471 /var/tmp/spdk-raid.sock 00:24:38.456 16:42:09 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:38.456 16:42:09 -- common/autotest_common.sh@819 -- # '[' -z 141471 ']' 00:24:38.456 16:42:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:38.456 16:42:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:38.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:38.456 16:42:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:38.456 16:42:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:38.456 16:42:09 -- common/autotest_common.sh@10 -- # set +x 00:24:38.456 [2024-07-13 16:42:09.797686] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:38.456 [2024-07-13 16:42:09.798189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141471 ] 00:24:38.456 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:38.456 Zero copy mechanism will not be used. 00:24:38.713 [2024-07-13 16:42:09.952157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.713 [2024-07-13 16:42:10.031112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.713 [2024-07-13 16:42:10.108524] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:39.276 16:42:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:39.276 16:42:10 -- common/autotest_common.sh@852 -- # return 0 00:24:39.276 16:42:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:39.276 16:42:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:39.276 16:42:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:39.533 BaseBdev1 00:24:39.533 16:42:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:39.533 16:42:10 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:39.533 16:42:10 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:39.790 BaseBdev2 00:24:39.790 16:42:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:39.790 16:42:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:39.790 16:42:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:40.049 BaseBdev3 00:24:40.049 16:42:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:40.049 16:42:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:40.049 16:42:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:40.306 BaseBdev4 00:24:40.306 16:42:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:40.564 spare_malloc 00:24:40.564 16:42:11 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:40.821 spare_delay 00:24:40.822 16:42:12 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:40.822 [2024-07-13 16:42:12.263153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:40.822 [2024-07-13 16:42:12.263475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.822 [2024-07-13 16:42:12.263562] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:40.822 [2024-07-13 16:42:12.263707] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.822 [2024-07-13 16:42:12.266773] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.822 [2024-07-13 16:42:12.266966] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:40.822 spare 00:24:40.822 16:42:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:41.080 [2024-07-13 16:42:12.443399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:41.080 [2024-07-13 16:42:12.446101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:41.080 [2024-07-13 16:42:12.446279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:41.080 [2024-07-13 16:42:12.446351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:41.080 [2024-07-13 16:42:12.446563] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:24:41.080 [2024-07-13 16:42:12.446676] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:41.080 [2024-07-13 16:42:12.446912] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:24:41.080 [2024-07-13 16:42:12.447766] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:24:41.080 [2024-07-13 16:42:12.447886] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:24:41.080 [2024-07-13 16:42:12.448201] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
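The whole array exercised above is assembled with plain RPC calls against that socket (the bdev_raid_get_bdevs query started on the previous line pairs with the jq filter just below). Condensed from the trace — same sizes and flags as logged, only the loop is editorial:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"    # 32 MiB backing, 512 B blocks
done
$RPC bdev_malloc_create 32 512 -b spare_malloc
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare     # slowed-down spare for the rebuild
$RPC bdev_raid_create -z 64 -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1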
00:24:41.080 16:42:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.339 16:42:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:41.339 "name": "raid_bdev1", 00:24:41.339 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:41.339 "strip_size_kb": 64, 00:24:41.339 "state": "online", 00:24:41.339 "raid_level": "raid5f", 00:24:41.339 "superblock": false, 00:24:41.339 "num_base_bdevs": 4, 00:24:41.339 "num_base_bdevs_discovered": 4, 00:24:41.339 "num_base_bdevs_operational": 4, 00:24:41.339 "base_bdevs_list": [ 00:24:41.339 { 00:24:41.339 "name": "BaseBdev1", 00:24:41.339 "uuid": "98b414f0-1092-499b-9ed1-757131db4496", 00:24:41.339 "is_configured": true, 00:24:41.339 "data_offset": 0, 00:24:41.339 "data_size": 65536 00:24:41.339 }, 00:24:41.339 { 00:24:41.339 "name": "BaseBdev2", 00:24:41.339 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:41.339 "is_configured": true, 00:24:41.339 "data_offset": 0, 00:24:41.339 "data_size": 65536 00:24:41.339 }, 00:24:41.339 { 00:24:41.339 "name": "BaseBdev3", 00:24:41.339 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:41.339 "is_configured": true, 00:24:41.339 "data_offset": 0, 00:24:41.339 "data_size": 65536 00:24:41.339 }, 00:24:41.339 { 00:24:41.339 "name": "BaseBdev4", 00:24:41.339 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:41.339 "is_configured": true, 00:24:41.339 "data_offset": 0, 00:24:41.339 "data_size": 65536 00:24:41.339 } 00:24:41.339 ] 00:24:41.339 }' 00:24:41.339 16:42:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:41.339 16:42:12 -- common/autotest_common.sh@10 -- # set +x 00:24:41.907 16:42:13 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:41.907 16:42:13 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:42.165 [2024-07-13 16:42:13.480512] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:42.165 16:42:13 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:24:42.165 16:42:13 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.165 16:42:13 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:42.423 16:42:13 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:42.423 16:42:13 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:42.423 16:42:13 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:42.423 16:42:13 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:42.423 16:42:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:42.423 16:42:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:42.423 16:42:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:42.423 16:42:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:42.423 16:42:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:42.423 16:42:13 -- bdev/nbd_common.sh@12 -- # local i 00:24:42.423 16:42:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:42.423 16:42:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:42.423 16:42:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:42.681 [2024-07-13 16:42:13.980550] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:42.681 /dev/nbd0 00:24:42.681 16:42:14 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:24:42.681 16:42:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:42.681 16:42:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:42.681 16:42:14 -- common/autotest_common.sh@857 -- # local i 00:24:42.681 16:42:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:42.681 16:42:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:42.681 16:42:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:42.681 16:42:14 -- common/autotest_common.sh@861 -- # break 00:24:42.681 16:42:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:42.681 16:42:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:42.681 16:42:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.681 1+0 records in 00:24:42.681 1+0 records out 00:24:42.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596203 s, 6.9 MB/s 00:24:42.681 16:42:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.681 16:42:14 -- common/autotest_common.sh@874 -- # size=4096 00:24:42.681 16:42:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.681 16:42:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:42.681 16:42:14 -- common/autotest_common.sh@877 -- # return 0 00:24:42.681 16:42:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.681 16:42:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:42.681 16:42:14 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:42.681 16:42:14 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:24:42.681 16:42:14 -- bdev/bdev_raid.sh@582 -- # echo 192 00:24:42.681 16:42:14 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:24:43.247 512+0 records in 00:24:43.247 512+0 records out 00:24:43.247 100663296 bytes (101 MB, 96 MiB) copied, 0.483809 s, 208 MB/s 00:24:43.247 16:42:14 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:43.248 16:42:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:43.248 16:42:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:43.248 16:42:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:43.248 16:42:14 -- bdev/nbd_common.sh@51 -- # local i 00:24:43.248 16:42:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:43.248 16:42:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:43.506 16:42:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:43.506 [2024-07-13 16:42:14.739112] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.506 16:42:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:43.506 16:42:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:43.506 16:42:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:43.506 16:42:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:43.506 16:42:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:43.506 16:42:14 -- bdev/nbd_common.sh@41 -- # break 00:24:43.506 16:42:14 -- bdev/nbd_common.sh@45 -- # return 0 00:24:43.506 16:42:14 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:43.506 [2024-07-13 16:42:14.958736] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
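A note on the arithmetic in the dd transfer above: raid5f over four base bdevs keeps three data strips plus one parity strip per stripe, and with strip_size_kb=64 one full-stripe write unit is 3 x 64 KiB = 192 KiB = 196608 bytes = 384 x 512-byte blocks — exactly the write_unit_size=384 and the echoed 192 the trace computes. dd then pushes count=512 such units, 512 x 196608 = 100663296 bytes (96 MiB), matching its summary line. Removing BaseBdev1 straight afterwards, as logged above, drops the array to the degraded 3-of-4 state the next verification expects.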
00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.765 16:42:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.023 16:42:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:44.023 "name": "raid_bdev1", 00:24:44.023 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:44.024 "strip_size_kb": 64, 00:24:44.024 "state": "online", 00:24:44.024 "raid_level": "raid5f", 00:24:44.024 "superblock": false, 00:24:44.024 "num_base_bdevs": 4, 00:24:44.024 "num_base_bdevs_discovered": 3, 00:24:44.024 "num_base_bdevs_operational": 3, 00:24:44.024 "base_bdevs_list": [ 00:24:44.024 { 00:24:44.024 "name": null, 00:24:44.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.024 "is_configured": false, 00:24:44.024 "data_offset": 0, 00:24:44.024 "data_size": 65536 00:24:44.024 }, 00:24:44.024 { 00:24:44.024 "name": "BaseBdev2", 00:24:44.024 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:44.024 "is_configured": true, 00:24:44.024 "data_offset": 0, 00:24:44.024 "data_size": 65536 00:24:44.024 }, 00:24:44.024 { 00:24:44.024 "name": "BaseBdev3", 00:24:44.024 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:44.024 "is_configured": true, 00:24:44.024 "data_offset": 0, 00:24:44.024 "data_size": 65536 00:24:44.024 }, 00:24:44.024 { 00:24:44.024 "name": "BaseBdev4", 00:24:44.024 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:44.024 "is_configured": true, 00:24:44.024 "data_offset": 0, 00:24:44.024 "data_size": 65536 00:24:44.024 } 00:24:44.024 ] 00:24:44.024 }' 00:24:44.024 16:42:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:44.024 16:42:15 -- common/autotest_common.sh@10 -- # set +x 00:24:44.591 16:42:15 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:44.591 [2024-07-13 16:42:16.027004] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:44.591 [2024-07-13 16:42:16.027224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:44.591 [2024-07-13 16:42:16.033231] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027a60 00:24:44.591 [2024-07-13 16:42:16.036148] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:44.591 16:42:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
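The long middle of the test, resuming with the verify_raid_bdev_process trace below, is one polling loop: re-read the raid bdev and watch .process.progress.blocks climb (23040, 26880, ..., 180480 in the dumps that follow) while .process.type stays "rebuild". Stripped to its essentials — the jq filters are the ones in the trace, the surrounding loop is a sketch:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
while :; do
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Stop once the rebuild process disappears from the raid bdev object.
    [ "$(jq -r '.process.type // "none"' <<< "$info")" = rebuild ] || break
    jq -r '.process.progress | "\(.blocks) blocks (\(.percent)%)"' <<< "$info"
    sleep 1
done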
00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:45.970 "name": "raid_bdev1", 00:24:45.970 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:45.970 "strip_size_kb": 64, 00:24:45.970 "state": "online", 00:24:45.970 "raid_level": "raid5f", 00:24:45.970 "superblock": false, 00:24:45.970 "num_base_bdevs": 4, 00:24:45.970 "num_base_bdevs_discovered": 4, 00:24:45.970 "num_base_bdevs_operational": 4, 00:24:45.970 "process": { 00:24:45.970 "type": "rebuild", 00:24:45.970 "target": "spare", 00:24:45.970 "progress": { 00:24:45.970 "blocks": 23040, 00:24:45.970 "percent": 11 00:24:45.970 } 00:24:45.970 }, 00:24:45.970 "base_bdevs_list": [ 00:24:45.970 { 00:24:45.970 "name": "spare", 00:24:45.970 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:45.970 "is_configured": true, 00:24:45.970 "data_offset": 0, 00:24:45.970 "data_size": 65536 00:24:45.970 }, 00:24:45.970 { 00:24:45.970 "name": "BaseBdev2", 00:24:45.970 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:45.970 "is_configured": true, 00:24:45.970 "data_offset": 0, 00:24:45.970 "data_size": 65536 00:24:45.970 }, 00:24:45.970 { 00:24:45.970 "name": "BaseBdev3", 00:24:45.970 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:45.970 "is_configured": true, 00:24:45.970 "data_offset": 0, 00:24:45.970 "data_size": 65536 00:24:45.970 }, 00:24:45.970 { 00:24:45.970 "name": "BaseBdev4", 00:24:45.970 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:45.970 "is_configured": true, 00:24:45.970 "data_offset": 0, 00:24:45.970 "data_size": 65536 00:24:45.970 } 00:24:45.970 ] 00:24:45.970 }' 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.970 16:42:17 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:46.230 [2024-07-13 16:42:17.665796] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:46.489 [2024-07-13 16:42:17.747828] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:46.489 [2024-07-13 16:42:17.748083] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.489 16:42:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.749 16:42:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:46.749 "name": "raid_bdev1", 00:24:46.749 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:46.749 "strip_size_kb": 64, 00:24:46.749 "state": "online", 00:24:46.749 "raid_level": "raid5f", 00:24:46.749 "superblock": false, 00:24:46.749 "num_base_bdevs": 4, 00:24:46.749 "num_base_bdevs_discovered": 3, 00:24:46.749 "num_base_bdevs_operational": 3, 00:24:46.749 "base_bdevs_list": [ 00:24:46.749 { 00:24:46.749 "name": null, 00:24:46.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.749 "is_configured": false, 00:24:46.749 "data_offset": 0, 00:24:46.749 "data_size": 65536 00:24:46.749 }, 00:24:46.749 { 00:24:46.749 "name": "BaseBdev2", 00:24:46.749 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:46.749 "is_configured": true, 00:24:46.749 "data_offset": 0, 00:24:46.749 "data_size": 65536 00:24:46.749 }, 00:24:46.749 { 00:24:46.749 "name": "BaseBdev3", 00:24:46.749 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:46.749 "is_configured": true, 00:24:46.749 "data_offset": 0, 00:24:46.749 "data_size": 65536 00:24:46.749 }, 00:24:46.749 { 00:24:46.749 "name": "BaseBdev4", 00:24:46.749 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:46.749 "is_configured": true, 00:24:46.749 "data_offset": 0, 00:24:46.749 "data_size": 65536 00:24:46.749 } 00:24:46.749 ] 00:24:46.749 }' 00:24:46.749 16:42:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:46.749 16:42:17 -- common/autotest_common.sh@10 -- # set +x 00:24:47.318 16:42:18 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:47.318 16:42:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:47.318 16:42:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:47.318 16:42:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:47.318 16:42:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:47.318 16:42:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.318 16:42:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.318 16:42:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:47.318 "name": "raid_bdev1", 00:24:47.318 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:47.318 "strip_size_kb": 64, 00:24:47.318 "state": "online", 00:24:47.318 "raid_level": "raid5f", 00:24:47.318 "superblock": false, 00:24:47.318 "num_base_bdevs": 4, 00:24:47.318 "num_base_bdevs_discovered": 3, 00:24:47.318 "num_base_bdevs_operational": 3, 00:24:47.318 "base_bdevs_list": [ 00:24:47.318 { 00:24:47.318 "name": null, 00:24:47.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.318 "is_configured": false, 00:24:47.318 "data_offset": 0, 00:24:47.318 "data_size": 65536 00:24:47.318 }, 00:24:47.318 { 00:24:47.318 "name": "BaseBdev2", 00:24:47.318 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:47.318 "is_configured": true, 00:24:47.318 "data_offset": 0, 00:24:47.318 "data_size": 65536 00:24:47.318 }, 00:24:47.318 { 00:24:47.318 "name": "BaseBdev3", 00:24:47.318 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:47.318 "is_configured": true, 
00:24:47.318 "data_offset": 0, 00:24:47.318 "data_size": 65536 00:24:47.318 }, 00:24:47.318 { 00:24:47.318 "name": "BaseBdev4", 00:24:47.318 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:47.318 "is_configured": true, 00:24:47.318 "data_offset": 0, 00:24:47.318 "data_size": 65536 00:24:47.318 } 00:24:47.318 ] 00:24:47.318 }' 00:24:47.318 16:42:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:47.589 16:42:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:47.589 16:42:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:47.589 16:42:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:47.589 16:42:18 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:47.863 [2024-07-13 16:42:19.085437] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:47.863 [2024-07-13 16:42:19.085666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:47.863 [2024-07-13 16:42:19.091571] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027c00 00:24:47.863 [2024-07-13 16:42:19.094425] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:47.863 16:42:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:48.799 16:42:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:48.799 16:42:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:48.799 16:42:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:48.799 16:42:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:48.799 16:42:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:48.799 16:42:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.799 16:42:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:49.057 "name": "raid_bdev1", 00:24:49.057 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:49.057 "strip_size_kb": 64, 00:24:49.057 "state": "online", 00:24:49.057 "raid_level": "raid5f", 00:24:49.057 "superblock": false, 00:24:49.057 "num_base_bdevs": 4, 00:24:49.057 "num_base_bdevs_discovered": 4, 00:24:49.057 "num_base_bdevs_operational": 4, 00:24:49.057 "process": { 00:24:49.057 "type": "rebuild", 00:24:49.057 "target": "spare", 00:24:49.057 "progress": { 00:24:49.057 "blocks": 23040, 00:24:49.057 "percent": 11 00:24:49.057 } 00:24:49.057 }, 00:24:49.057 "base_bdevs_list": [ 00:24:49.057 { 00:24:49.057 "name": "spare", 00:24:49.057 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:49.057 "is_configured": true, 00:24:49.057 "data_offset": 0, 00:24:49.057 "data_size": 65536 00:24:49.057 }, 00:24:49.057 { 00:24:49.057 "name": "BaseBdev2", 00:24:49.057 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:49.057 "is_configured": true, 00:24:49.057 "data_offset": 0, 00:24:49.057 "data_size": 65536 00:24:49.057 }, 00:24:49.057 { 00:24:49.057 "name": "BaseBdev3", 00:24:49.057 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:49.057 "is_configured": true, 00:24:49.057 "data_offset": 0, 00:24:49.057 "data_size": 65536 00:24:49.057 }, 00:24:49.057 { 00:24:49.057 "name": "BaseBdev4", 00:24:49.057 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:49.057 "is_configured": true, 00:24:49.057 "data_offset": 0, 
00:24:49.057 "data_size": 65536 00:24:49.057 } 00:24:49.057 ] 00:24:49.057 }' 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@657 -- # local timeout=676 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.057 16:42:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.316 16:42:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:49.316 "name": "raid_bdev1", 00:24:49.316 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:49.316 "strip_size_kb": 64, 00:24:49.316 "state": "online", 00:24:49.316 "raid_level": "raid5f", 00:24:49.316 "superblock": false, 00:24:49.316 "num_base_bdevs": 4, 00:24:49.316 "num_base_bdevs_discovered": 4, 00:24:49.316 "num_base_bdevs_operational": 4, 00:24:49.316 "process": { 00:24:49.316 "type": "rebuild", 00:24:49.316 "target": "spare", 00:24:49.316 "progress": { 00:24:49.316 "blocks": 26880, 00:24:49.316 "percent": 13 00:24:49.316 } 00:24:49.316 }, 00:24:49.316 "base_bdevs_list": [ 00:24:49.316 { 00:24:49.316 "name": "spare", 00:24:49.316 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:49.316 "is_configured": true, 00:24:49.316 "data_offset": 0, 00:24:49.316 "data_size": 65536 00:24:49.316 }, 00:24:49.316 { 00:24:49.316 "name": "BaseBdev2", 00:24:49.316 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:49.316 "is_configured": true, 00:24:49.316 "data_offset": 0, 00:24:49.316 "data_size": 65536 00:24:49.316 }, 00:24:49.316 { 00:24:49.316 "name": "BaseBdev3", 00:24:49.316 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:49.316 "is_configured": true, 00:24:49.316 "data_offset": 0, 00:24:49.316 "data_size": 65536 00:24:49.316 }, 00:24:49.316 { 00:24:49.316 "name": "BaseBdev4", 00:24:49.316 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:49.316 "is_configured": true, 00:24:49.316 "data_offset": 0, 00:24:49.316 "data_size": 65536 00:24:49.316 } 00:24:49.316 ] 00:24:49.316 }' 00:24:49.316 16:42:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:49.316 16:42:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:49.316 16:42:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:49.316 16:42:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:49.316 16:42:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:50.251 16:42:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:50.251 16:42:21 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:50.251 16:42:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:50.251 16:42:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:50.251 16:42:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:50.251 16:42:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:50.251 16:42:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.251 16:42:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.510 16:42:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:50.510 "name": "raid_bdev1", 00:24:50.510 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:50.510 "strip_size_kb": 64, 00:24:50.510 "state": "online", 00:24:50.510 "raid_level": "raid5f", 00:24:50.510 "superblock": false, 00:24:50.510 "num_base_bdevs": 4, 00:24:50.510 "num_base_bdevs_discovered": 4, 00:24:50.510 "num_base_bdevs_operational": 4, 00:24:50.510 "process": { 00:24:50.510 "type": "rebuild", 00:24:50.510 "target": "spare", 00:24:50.510 "progress": { 00:24:50.510 "blocks": 53760, 00:24:50.510 "percent": 27 00:24:50.510 } 00:24:50.510 }, 00:24:50.510 "base_bdevs_list": [ 00:24:50.510 { 00:24:50.510 "name": "spare", 00:24:50.510 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:50.510 "is_configured": true, 00:24:50.510 "data_offset": 0, 00:24:50.510 "data_size": 65536 00:24:50.510 }, 00:24:50.510 { 00:24:50.510 "name": "BaseBdev2", 00:24:50.510 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:50.510 "is_configured": true, 00:24:50.510 "data_offset": 0, 00:24:50.510 "data_size": 65536 00:24:50.510 }, 00:24:50.510 { 00:24:50.510 "name": "BaseBdev3", 00:24:50.510 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:50.510 "is_configured": true, 00:24:50.510 "data_offset": 0, 00:24:50.510 "data_size": 65536 00:24:50.510 }, 00:24:50.510 { 00:24:50.510 "name": "BaseBdev4", 00:24:50.510 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:50.510 "is_configured": true, 00:24:50.510 "data_offset": 0, 00:24:50.510 "data_size": 65536 00:24:50.510 } 00:24:50.510 ] 00:24:50.510 }' 00:24:50.510 16:42:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:50.768 16:42:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:50.768 16:42:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:50.768 16:42:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:50.768 16:42:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:51.702 16:42:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:51.702 16:42:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:51.702 16:42:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:51.702 16:42:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:51.702 16:42:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:51.702 16:42:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:51.702 16:42:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.702 16:42:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.960 16:42:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:51.960 "name": "raid_bdev1", 00:24:51.960 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:51.960 "strip_size_kb": 64, 00:24:51.960 "state": "online", 
00:24:51.960 "raid_level": "raid5f", 00:24:51.960 "superblock": false, 00:24:51.960 "num_base_bdevs": 4, 00:24:51.960 "num_base_bdevs_discovered": 4, 00:24:51.960 "num_base_bdevs_operational": 4, 00:24:51.960 "process": { 00:24:51.960 "type": "rebuild", 00:24:51.960 "target": "spare", 00:24:51.960 "progress": { 00:24:51.960 "blocks": 78720, 00:24:51.960 "percent": 40 00:24:51.960 } 00:24:51.960 }, 00:24:51.960 "base_bdevs_list": [ 00:24:51.960 { 00:24:51.960 "name": "spare", 00:24:51.960 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:51.960 "is_configured": true, 00:24:51.960 "data_offset": 0, 00:24:51.960 "data_size": 65536 00:24:51.960 }, 00:24:51.960 { 00:24:51.960 "name": "BaseBdev2", 00:24:51.960 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:51.960 "is_configured": true, 00:24:51.960 "data_offset": 0, 00:24:51.960 "data_size": 65536 00:24:51.960 }, 00:24:51.960 { 00:24:51.960 "name": "BaseBdev3", 00:24:51.960 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:51.960 "is_configured": true, 00:24:51.960 "data_offset": 0, 00:24:51.960 "data_size": 65536 00:24:51.960 }, 00:24:51.960 { 00:24:51.960 "name": "BaseBdev4", 00:24:51.960 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:51.960 "is_configured": true, 00:24:51.960 "data_offset": 0, 00:24:51.960 "data_size": 65536 00:24:51.960 } 00:24:51.960 ] 00:24:51.960 }' 00:24:51.960 16:42:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:51.960 16:42:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:51.960 16:42:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:51.960 16:42:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.960 16:42:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:53.334 "name": "raid_bdev1", 00:24:53.334 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:53.334 "strip_size_kb": 64, 00:24:53.334 "state": "online", 00:24:53.334 "raid_level": "raid5f", 00:24:53.334 "superblock": false, 00:24:53.334 "num_base_bdevs": 4, 00:24:53.334 "num_base_bdevs_discovered": 4, 00:24:53.334 "num_base_bdevs_operational": 4, 00:24:53.334 "process": { 00:24:53.334 "type": "rebuild", 00:24:53.334 "target": "spare", 00:24:53.334 "progress": { 00:24:53.334 "blocks": 103680, 00:24:53.334 "percent": 52 00:24:53.334 } 00:24:53.334 }, 00:24:53.334 "base_bdevs_list": [ 00:24:53.334 { 00:24:53.334 "name": "spare", 00:24:53.334 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:53.334 "is_configured": true, 00:24:53.334 "data_offset": 0, 00:24:53.334 "data_size": 65536 00:24:53.334 }, 00:24:53.334 { 00:24:53.334 "name": "BaseBdev2", 00:24:53.334 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:53.334 "is_configured": true, 00:24:53.334 "data_offset": 0, 
00:24:53.334 "data_size": 65536 00:24:53.334 }, 00:24:53.334 { 00:24:53.334 "name": "BaseBdev3", 00:24:53.334 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:53.334 "is_configured": true, 00:24:53.334 "data_offset": 0, 00:24:53.334 "data_size": 65536 00:24:53.334 }, 00:24:53.334 { 00:24:53.334 "name": "BaseBdev4", 00:24:53.334 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:53.334 "is_configured": true, 00:24:53.334 "data_offset": 0, 00:24:53.334 "data_size": 65536 00:24:53.334 } 00:24:53.334 ] 00:24:53.334 }' 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.334 16:42:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:54.269 16:42:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:54.269 16:42:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:54.269 16:42:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:54.269 16:42:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:54.269 16:42:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:54.269 16:42:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:54.269 16:42:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.269 16:42:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.528 16:42:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:54.528 "name": "raid_bdev1", 00:24:54.528 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:54.528 "strip_size_kb": 64, 00:24:54.528 "state": "online", 00:24:54.528 "raid_level": "raid5f", 00:24:54.528 "superblock": false, 00:24:54.528 "num_base_bdevs": 4, 00:24:54.528 "num_base_bdevs_discovered": 4, 00:24:54.528 "num_base_bdevs_operational": 4, 00:24:54.528 "process": { 00:24:54.528 "type": "rebuild", 00:24:54.528 "target": "spare", 00:24:54.528 "progress": { 00:24:54.528 "blocks": 130560, 00:24:54.528 "percent": 66 00:24:54.528 } 00:24:54.528 }, 00:24:54.528 "base_bdevs_list": [ 00:24:54.528 { 00:24:54.528 "name": "spare", 00:24:54.528 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:54.528 "is_configured": true, 00:24:54.528 "data_offset": 0, 00:24:54.528 "data_size": 65536 00:24:54.528 }, 00:24:54.528 { 00:24:54.528 "name": "BaseBdev2", 00:24:54.528 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:54.528 "is_configured": true, 00:24:54.528 "data_offset": 0, 00:24:54.528 "data_size": 65536 00:24:54.528 }, 00:24:54.528 { 00:24:54.528 "name": "BaseBdev3", 00:24:54.528 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:54.528 "is_configured": true, 00:24:54.528 "data_offset": 0, 00:24:54.528 "data_size": 65536 00:24:54.528 }, 00:24:54.528 { 00:24:54.528 "name": "BaseBdev4", 00:24:54.528 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:54.528 "is_configured": true, 00:24:54.528 "data_offset": 0, 00:24:54.528 "data_size": 65536 00:24:54.528 } 00:24:54.528 ] 00:24:54.528 }' 00:24:54.528 16:42:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:54.786 16:42:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:54.786 16:42:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:54.786 16:42:26 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:24:54.786 16:42:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:55.721 16:42:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:55.721 16:42:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:55.721 16:42:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:55.721 16:42:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:55.721 16:42:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:55.721 16:42:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:55.721 16:42:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.721 16:42:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.981 16:42:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:55.981 "name": "raid_bdev1", 00:24:55.981 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:55.981 "strip_size_kb": 64, 00:24:55.981 "state": "online", 00:24:55.981 "raid_level": "raid5f", 00:24:55.981 "superblock": false, 00:24:55.981 "num_base_bdevs": 4, 00:24:55.981 "num_base_bdevs_discovered": 4, 00:24:55.981 "num_base_bdevs_operational": 4, 00:24:55.981 "process": { 00:24:55.981 "type": "rebuild", 00:24:55.981 "target": "spare", 00:24:55.981 "progress": { 00:24:55.981 "blocks": 155520, 00:24:55.981 "percent": 79 00:24:55.981 } 00:24:55.981 }, 00:24:55.981 "base_bdevs_list": [ 00:24:55.981 { 00:24:55.981 "name": "spare", 00:24:55.981 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:55.981 "is_configured": true, 00:24:55.981 "data_offset": 0, 00:24:55.981 "data_size": 65536 00:24:55.981 }, 00:24:55.981 { 00:24:55.981 "name": "BaseBdev2", 00:24:55.981 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:55.981 "is_configured": true, 00:24:55.981 "data_offset": 0, 00:24:55.981 "data_size": 65536 00:24:55.981 }, 00:24:55.981 { 00:24:55.981 "name": "BaseBdev3", 00:24:55.981 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:55.981 "is_configured": true, 00:24:55.981 "data_offset": 0, 00:24:55.981 "data_size": 65536 00:24:55.981 }, 00:24:55.981 { 00:24:55.981 "name": "BaseBdev4", 00:24:55.981 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:55.981 "is_configured": true, 00:24:55.981 "data_offset": 0, 00:24:55.981 "data_size": 65536 00:24:55.981 } 00:24:55.981 ] 00:24:55.981 }' 00:24:55.981 16:42:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:55.981 16:42:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:55.981 16:42:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:55.981 16:42:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:55.981 16:42:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.360 16:42:28 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:57.360 "name": "raid_bdev1", 00:24:57.360 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:57.360 "strip_size_kb": 64, 00:24:57.360 "state": "online", 00:24:57.360 "raid_level": "raid5f", 00:24:57.360 "superblock": false, 00:24:57.360 "num_base_bdevs": 4, 00:24:57.360 "num_base_bdevs_discovered": 4, 00:24:57.360 "num_base_bdevs_operational": 4, 00:24:57.360 "process": { 00:24:57.360 "type": "rebuild", 00:24:57.360 "target": "spare", 00:24:57.360 "progress": { 00:24:57.360 "blocks": 180480, 00:24:57.360 "percent": 91 00:24:57.360 } 00:24:57.360 }, 00:24:57.360 "base_bdevs_list": [ 00:24:57.360 { 00:24:57.360 "name": "spare", 00:24:57.360 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:57.360 "is_configured": true, 00:24:57.360 "data_offset": 0, 00:24:57.360 "data_size": 65536 00:24:57.360 }, 00:24:57.360 { 00:24:57.360 "name": "BaseBdev2", 00:24:57.360 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:57.360 "is_configured": true, 00:24:57.360 "data_offset": 0, 00:24:57.360 "data_size": 65536 00:24:57.360 }, 00:24:57.360 { 00:24:57.360 "name": "BaseBdev3", 00:24:57.360 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:57.360 "is_configured": true, 00:24:57.360 "data_offset": 0, 00:24:57.360 "data_size": 65536 00:24:57.360 }, 00:24:57.360 { 00:24:57.360 "name": "BaseBdev4", 00:24:57.360 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:57.360 "is_configured": true, 00:24:57.360 "data_offset": 0, 00:24:57.360 "data_size": 65536 00:24:57.360 } 00:24:57.360 ] 00:24:57.360 }' 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.360 16:42:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:58.295 [2024-07-13 16:42:29.461573] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:58.295 [2024-07-13 16:42:29.461815] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:58.295 [2024-07-13 16:42:29.462039] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.295 16:42:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:58.295 16:42:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.295 16:42:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:58.295 16:42:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:58.295 16:42:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:58.295 16:42:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:58.295 16:42:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.295 16:42:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.553 16:42:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:58.553 "name": "raid_bdev1", 00:24:58.553 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:58.553 "strip_size_kb": 64, 00:24:58.553 "state": "online", 00:24:58.553 "raid_level": "raid5f", 00:24:58.553 "superblock": false, 00:24:58.553 "num_base_bdevs": 4, 00:24:58.553 "num_base_bdevs_discovered": 4, 00:24:58.553 "num_base_bdevs_operational": 4, 00:24:58.553 "base_bdevs_list": [ 00:24:58.553 { 
00:24:58.553 "name": "spare", 00:24:58.553 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:58.553 "is_configured": true, 00:24:58.553 "data_offset": 0, 00:24:58.553 "data_size": 65536 00:24:58.553 }, 00:24:58.553 { 00:24:58.553 "name": "BaseBdev2", 00:24:58.553 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:58.553 "is_configured": true, 00:24:58.553 "data_offset": 0, 00:24:58.553 "data_size": 65536 00:24:58.553 }, 00:24:58.553 { 00:24:58.553 "name": "BaseBdev3", 00:24:58.553 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:58.553 "is_configured": true, 00:24:58.553 "data_offset": 0, 00:24:58.553 "data_size": 65536 00:24:58.553 }, 00:24:58.553 { 00:24:58.553 "name": "BaseBdev4", 00:24:58.553 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:58.553 "is_configured": true, 00:24:58.553 "data_offset": 0, 00:24:58.553 "data_size": 65536 00:24:58.553 } 00:24:58.553 ] 00:24:58.553 }' 00:24:58.553 16:42:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@660 -- # break 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.811 16:42:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:59.070 "name": "raid_bdev1", 00:24:59.070 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:59.070 "strip_size_kb": 64, 00:24:59.070 "state": "online", 00:24:59.070 "raid_level": "raid5f", 00:24:59.070 "superblock": false, 00:24:59.070 "num_base_bdevs": 4, 00:24:59.070 "num_base_bdevs_discovered": 4, 00:24:59.070 "num_base_bdevs_operational": 4, 00:24:59.070 "base_bdevs_list": [ 00:24:59.070 { 00:24:59.070 "name": "spare", 00:24:59.070 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:59.070 "is_configured": true, 00:24:59.070 "data_offset": 0, 00:24:59.070 "data_size": 65536 00:24:59.070 }, 00:24:59.070 { 00:24:59.070 "name": "BaseBdev2", 00:24:59.070 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:59.070 "is_configured": true, 00:24:59.070 "data_offset": 0, 00:24:59.070 "data_size": 65536 00:24:59.070 }, 00:24:59.070 { 00:24:59.070 "name": "BaseBdev3", 00:24:59.070 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:59.070 "is_configured": true, 00:24:59.070 "data_offset": 0, 00:24:59.070 "data_size": 65536 00:24:59.070 }, 00:24:59.070 { 00:24:59.070 "name": "BaseBdev4", 00:24:59.070 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:59.070 "is_configured": true, 00:24:59.070 "data_offset": 0, 00:24:59.070 "data_size": 65536 00:24:59.070 } 00:24:59.070 ] 00:24:59.070 }' 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:59.070 16:42:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:59.071 16:42:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:59.071 16:42:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.071 16:42:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.330 16:42:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.330 "name": "raid_bdev1", 00:24:59.330 "uuid": "dd2b595c-65e7-4bd5-af8c-584cfe5d0d20", 00:24:59.330 "strip_size_kb": 64, 00:24:59.330 "state": "online", 00:24:59.330 "raid_level": "raid5f", 00:24:59.330 "superblock": false, 00:24:59.330 "num_base_bdevs": 4, 00:24:59.330 "num_base_bdevs_discovered": 4, 00:24:59.330 "num_base_bdevs_operational": 4, 00:24:59.330 "base_bdevs_list": [ 00:24:59.330 { 00:24:59.330 "name": "spare", 00:24:59.330 "uuid": "cb289281-dc4b-5ab0-b783-992310c3ded2", 00:24:59.330 "is_configured": true, 00:24:59.330 "data_offset": 0, 00:24:59.330 "data_size": 65536 00:24:59.330 }, 00:24:59.330 { 00:24:59.330 "name": "BaseBdev2", 00:24:59.330 "uuid": "5c7236be-1e67-4d2f-9ae5-5bc8f6f51d0c", 00:24:59.330 "is_configured": true, 00:24:59.330 "data_offset": 0, 00:24:59.330 "data_size": 65536 00:24:59.330 }, 00:24:59.330 { 00:24:59.330 "name": "BaseBdev3", 00:24:59.330 "uuid": "d60178f7-dc24-4d1f-b2ab-c95e37b28868", 00:24:59.330 "is_configured": true, 00:24:59.330 "data_offset": 0, 00:24:59.330 "data_size": 65536 00:24:59.330 }, 00:24:59.330 { 00:24:59.330 "name": "BaseBdev4", 00:24:59.330 "uuid": "e0b694a6-c03c-4321-bac6-dae3c28da27b", 00:24:59.330 "is_configured": true, 00:24:59.330 "data_offset": 0, 00:24:59.330 "data_size": 65536 00:24:59.330 } 00:24:59.330 ] 00:24:59.330 }' 00:24:59.330 16:42:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.330 16:42:30 -- common/autotest_common.sh@10 -- # set +x 00:24:59.898 16:42:31 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:00.157 [2024-07-13 16:42:31.467354] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.157 [2024-07-13 16:42:31.467528] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.157 [2024-07-13 16:42:31.467809] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.157 [2024-07-13 16:42:31.468004] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.157 [2024-07-13 16:42:31.468084] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:25:00.157 16:42:31 -- bdev/bdev_raid.sh@671 -- # jq length 
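With raid_bdev1 deleted above, two closing checks remain (the bdev_raid_get_bdevs half of the jq-length pipeline and its 0 == 0 comparison follow just below): the raid list must come back empty, and the rebuilt spare must be byte-identical to BaseBdev1, whose strips it reconstructed from the surviving members' parity. In outline, using the same device names and paths as the trace:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
[ "$($RPC bdev_raid_get_bdevs all | jq length)" -eq 0 ]   # no raid bdevs left
$RPC nbd_start_disk BaseBdev1 /dev/nbd0
$RPC nbd_start_disk spare /dev/nbd1
cmp -i 0 /dev/nbd0 /dev/nbd1          # rebuilt spare must mirror BaseBdev1
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1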
00:25:00.157 16:42:31 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.416 16:42:31 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:00.416 16:42:31 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:00.416 16:42:31 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@12 -- # local i 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:00.416 /dev/nbd0 00:25:00.416 16:42:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:00.417 16:42:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:00.417 16:42:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:00.417 16:42:31 -- common/autotest_common.sh@857 -- # local i 00:25:00.417 16:42:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:00.417 16:42:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:00.417 16:42:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:00.417 16:42:31 -- common/autotest_common.sh@861 -- # break 00:25:00.417 16:42:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:00.417 16:42:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:00.417 16:42:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:00.417 1+0 records in 00:25:00.417 1+0 records out 00:25:00.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446578 s, 9.2 MB/s 00:25:00.417 16:42:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.676 16:42:31 -- common/autotest_common.sh@874 -- # size=4096 00:25:00.676 16:42:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.676 16:42:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:00.676 16:42:31 -- common/autotest_common.sh@877 -- # return 0 00:25:00.676 16:42:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:00.676 16:42:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:00.676 16:42:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:00.676 /dev/nbd1 00:25:00.676 16:42:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:00.676 16:42:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:00.676 16:42:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:00.676 16:42:32 -- common/autotest_common.sh@857 -- # local i 00:25:00.676 16:42:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:00.676 16:42:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:00.676 16:42:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:00.676 16:42:32 -- common/autotest_common.sh@861 -- # break 
00:25:00.676 16:42:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:00.676 16:42:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:00.676 16:42:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:00.676 1+0 records in 00:25:00.676 1+0 records out 00:25:00.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386378 s, 10.6 MB/s 00:25:00.676 16:42:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.676 16:42:32 -- common/autotest_common.sh@874 -- # size=4096 00:25:00.676 16:42:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.676 16:42:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:00.676 16:42:32 -- common/autotest_common.sh@877 -- # return 0 00:25:00.676 16:42:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:00.676 16:42:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:00.676 16:42:32 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:00.935 16:42:32 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:00.936 16:42:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:00.936 16:42:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:00.936 16:42:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:00.936 16:42:32 -- bdev/nbd_common.sh@51 -- # local i 00:25:00.936 16:42:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:00.936 16:42:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@41 -- # break 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@45 -- # return 0 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:01.194 16:42:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:01.453 16:42:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:01.453 16:42:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:01.453 16:42:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:01.453 16:42:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:01.453 16:42:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:01.453 16:42:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:01.453 16:42:32 -- bdev/nbd_common.sh@41 -- # break 00:25:01.453 16:42:32 -- bdev/nbd_common.sh@45 -- # return 0 00:25:01.453 16:42:32 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:01.453 16:42:32 -- bdev/bdev_raid.sh@709 -- # killprocess 141471 00:25:01.453 16:42:32 -- common/autotest_common.sh@926 -- # '[' -z 141471 ']' 00:25:01.453 16:42:32 -- common/autotest_common.sh@930 -- # kill -0 141471 00:25:01.453 16:42:32 -- common/autotest_common.sh@931 -- # uname 00:25:01.453 16:42:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:01.453 16:42:32 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141471 00:25:01.453 16:42:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:01.453 killing process with pid 141471 00:25:01.453 16:42:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:01.453 16:42:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141471' 00:25:01.453 Received shutdown signal, test time was about 60.000000 seconds 00:25:01.453 00:25:01.453 Latency(us) 00:25:01.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.453 =================================================================================================================== 00:25:01.453 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:01.453 16:42:32 -- common/autotest_common.sh@945 -- # kill 141471 00:25:01.453 [2024-07-13 16:42:32.807254] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:01.453 16:42:32 -- common/autotest_common.sh@950 -- # wait 141471 00:25:01.453 [2024-07-13 16:42:32.892608] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:02.020 00:25:02.020 real 0m23.574s 00:25:02.020 user 0m33.510s 00:25:02.020 sys 0m3.701s 00:25:02.020 16:42:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.020 ************************************ 00:25:02.020 END TEST raid5f_rebuild_test 00:25:02.020 ************************************ 00:25:02.020 16:42:33 -- common/autotest_common.sh@10 -- # set +x 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:25:02.020 16:42:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:02.020 16:42:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.020 16:42:33 -- common/autotest_common.sh@10 -- # set +x 00:25:02.020 ************************************ 00:25:02.020 START TEST raid5f_rebuild_test_sb 00:25:02.020 ************************************ 00:25:02.020 16:42:33 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@544 -- # raid_pid=142067 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@545 -- # waitforlisten 142067 /var/tmp/spdk-raid.sock 00:25:02.020 16:42:33 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:02.020 16:42:33 -- common/autotest_common.sh@819 -- # '[' -z 142067 ']' 00:25:02.020 16:42:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:02.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:02.020 16:42:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:02.021 16:42:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:02.021 16:42:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:02.021 16:42:33 -- common/autotest_common.sh@10 -- # set +x 00:25:02.021 [2024-07-13 16:42:33.434470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:02.021 [2024-07-13 16:42:33.434664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142067 ] 00:25:02.021 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:02.021 Zero copy mechanism will not be used. 
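The bdevperf process launched just above is the RPC target for the entire sb test: every bdev_malloc_create, bdev_passthru_create, and bdev_raid_* call that follows is sent to its UNIX socket. A minimal sketch of that launch pattern, with the flags copied from the command line in this log and a simple readiness poll standing in for the in-tree waitforlisten helper:

# Sketch only: start bdevperf as the RPC server, then wait for the socket.
sock=/var/tmp/spdk-raid.sock
./build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1    # keep polling until the UNIX socket accepts RPCs
done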
00:25:02.279 [2024-07-13 16:42:33.579987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.279 [2024-07-13 16:42:33.657454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.279 [2024-07-13 16:42:33.734684] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:03.213 16:42:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:03.213 16:42:34 -- common/autotest_common.sh@852 -- # return 0 00:25:03.213 16:42:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:03.213 16:42:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:03.213 16:42:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:03.213 BaseBdev1_malloc 00:25:03.213 16:42:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:03.471 [2024-07-13 16:42:34.778729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:03.471 [2024-07-13 16:42:34.778864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.471 [2024-07-13 16:42:34.778912] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:25:03.471 [2024-07-13 16:42:34.778957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.471 [2024-07-13 16:42:34.781774] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.471 [2024-07-13 16:42:34.781835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:03.471 BaseBdev1 00:25:03.471 16:42:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:03.471 16:42:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:03.471 16:42:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:03.728 BaseBdev2_malloc 00:25:03.728 16:42:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:03.728 [2024-07-13 16:42:35.146323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:03.728 [2024-07-13 16:42:35.146451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.728 [2024-07-13 16:42:35.146497] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:03.728 [2024-07-13 16:42:35.146546] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.728 [2024-07-13 16:42:35.149271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.728 [2024-07-13 16:42:35.149325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:03.728 BaseBdev2 00:25:03.728 16:42:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:03.728 16:42:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:03.728 16:42:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:03.986 BaseBdev3_malloc 00:25:03.986 16:42:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:25:04.244 [2024-07-13 16:42:35.533201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:04.244 [2024-07-13 16:42:35.533313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.244 [2024-07-13 16:42:35.533362] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:04.244 [2024-07-13 16:42:35.533411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.244 [2024-07-13 16:42:35.536067] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.244 [2024-07-13 16:42:35.536120] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:04.244 BaseBdev3 00:25:04.244 16:42:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:04.244 16:42:35 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:04.244 16:42:35 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:04.501 BaseBdev4_malloc 00:25:04.501 16:42:35 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:04.501 [2024-07-13 16:42:35.968372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:04.501 [2024-07-13 16:42:35.968495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.501 [2024-07-13 16:42:35.968535] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:04.501 [2024-07-13 16:42:35.968580] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.501 [2024-07-13 16:42:35.971184] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.501 [2024-07-13 16:42:35.971254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:04.758 BaseBdev4 00:25:04.758 16:42:35 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:04.758 spare_malloc 00:25:04.758 16:42:36 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:05.016 spare_delay 00:25:05.016 16:42:36 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:05.275 [2024-07-13 16:42:36.539520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:05.275 [2024-07-13 16:42:36.539635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.275 [2024-07-13 16:42:36.539678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:05.275 [2024-07-13 16:42:36.539722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.275 [2024-07-13 16:42:36.542525] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.275 [2024-07-13 16:42:36.542579] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:05.275 spare 00:25:05.275 16:42:36 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:05.534 [2024-07-13 16:42:36.791665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:05.534 [2024-07-13 16:42:36.794120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:05.534 [2024-07-13 16:42:36.794188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:05.534 [2024-07-13 16:42:36.794232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:05.534 [2024-07-13 16:42:36.794457] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:05.534 [2024-07-13 16:42:36.794483] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:05.534 [2024-07-13 16:42:36.794658] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:05.534 [2024-07-13 16:42:36.795484] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:05.534 [2024-07-13 16:42:36.795506] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:05.534 [2024-07-13 16:42:36.795731] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.534 16:42:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.793 16:42:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:05.793 "name": "raid_bdev1", 00:25:05.793 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:05.793 "strip_size_kb": 64, 00:25:05.793 "state": "online", 00:25:05.793 "raid_level": "raid5f", 00:25:05.793 "superblock": true, 00:25:05.793 "num_base_bdevs": 4, 00:25:05.793 "num_base_bdevs_discovered": 4, 00:25:05.793 "num_base_bdevs_operational": 4, 00:25:05.793 "base_bdevs_list": [ 00:25:05.793 { 00:25:05.793 "name": "BaseBdev1", 00:25:05.793 "uuid": "785bb952-d331-566e-9154-3f8a0faad34a", 00:25:05.793 "is_configured": true, 00:25:05.793 "data_offset": 2048, 00:25:05.793 "data_size": 63488 00:25:05.793 }, 00:25:05.793 { 00:25:05.793 "name": "BaseBdev2", 00:25:05.793 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:05.793 "is_configured": true, 00:25:05.793 "data_offset": 2048, 00:25:05.793 "data_size": 63488 00:25:05.793 }, 00:25:05.793 { 00:25:05.793 "name": "BaseBdev3", 00:25:05.793 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:05.793 "is_configured": true, 00:25:05.793 "data_offset": 2048, 00:25:05.793 "data_size": 63488 00:25:05.793 
}, 00:25:05.793 { 00:25:05.793 "name": "BaseBdev4", 00:25:05.793 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:05.793 "is_configured": true, 00:25:05.793 "data_offset": 2048, 00:25:05.793 "data_size": 63488 00:25:05.793 } 00:25:05.793 ] 00:25:05.793 }' 00:25:05.793 16:42:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:05.793 16:42:37 -- common/autotest_common.sh@10 -- # set +x 00:25:06.358 16:42:37 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:06.358 16:42:37 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:06.358 [2024-07-13 16:42:37.772023] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:06.358 16:42:37 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:25:06.358 16:42:37 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.358 16:42:37 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:06.617 16:42:37 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:06.617 16:42:37 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:06.617 16:42:37 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:06.617 16:42:37 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:06.617 16:42:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:06.617 16:42:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:06.617 16:42:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:06.617 16:42:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:06.617 16:42:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:06.617 16:42:37 -- bdev/nbd_common.sh@12 -- # local i 00:25:06.617 16:42:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:06.617 16:42:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:06.617 16:42:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:06.877 [2024-07-13 16:42:38.115985] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:25:06.877 /dev/nbd0 00:25:06.877 16:42:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:06.877 16:42:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:06.877 16:42:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:06.877 16:42:38 -- common/autotest_common.sh@857 -- # local i 00:25:06.877 16:42:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:06.877 16:42:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:06.877 16:42:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:06.877 16:42:38 -- common/autotest_common.sh@861 -- # break 00:25:06.877 16:42:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:06.877 16:42:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:06.877 16:42:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:06.877 1+0 records in 00:25:06.877 1+0 records out 00:25:06.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265541 s, 15.4 MB/s 00:25:06.877 16:42:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:06.877 16:42:38 -- common/autotest_common.sh@874 -- # size=4096 00:25:06.877 16:42:38 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:06.877 16:42:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:06.877 16:42:38 -- common/autotest_common.sh@877 -- # return 0 00:25:06.877 16:42:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:06.877 16:42:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:06.877 16:42:38 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:06.877 16:42:38 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:06.877 16:42:38 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:06.877 16:42:38 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:25:07.466 496+0 records in 00:25:07.466 496+0 records out 00:25:07.466 97517568 bytes (98 MB, 93 MiB) copied, 0.463729 s, 210 MB/s 00:25:07.466 16:42:38 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@51 -- # local i 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:07.466 [2024-07-13 16:42:38.855024] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@41 -- # break 00:25:07.466 16:42:38 -- bdev/nbd_common.sh@45 -- # return 0 00:25:07.466 16:42:38 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:07.734 [2024-07-13 16:42:39.026612] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.734 16:42:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.993 16:42:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.993 "name": "raid_bdev1", 00:25:07.993 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:07.993 
"strip_size_kb": 64, 00:25:07.993 "state": "online", 00:25:07.993 "raid_level": "raid5f", 00:25:07.993 "superblock": true, 00:25:07.993 "num_base_bdevs": 4, 00:25:07.993 "num_base_bdevs_discovered": 3, 00:25:07.993 "num_base_bdevs_operational": 3, 00:25:07.993 "base_bdevs_list": [ 00:25:07.993 { 00:25:07.993 "name": null, 00:25:07.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.993 "is_configured": false, 00:25:07.993 "data_offset": 2048, 00:25:07.993 "data_size": 63488 00:25:07.993 }, 00:25:07.993 { 00:25:07.993 "name": "BaseBdev2", 00:25:07.993 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:07.993 "is_configured": true, 00:25:07.993 "data_offset": 2048, 00:25:07.993 "data_size": 63488 00:25:07.993 }, 00:25:07.993 { 00:25:07.993 "name": "BaseBdev3", 00:25:07.993 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:07.993 "is_configured": true, 00:25:07.993 "data_offset": 2048, 00:25:07.993 "data_size": 63488 00:25:07.993 }, 00:25:07.993 { 00:25:07.993 "name": "BaseBdev4", 00:25:07.993 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:07.993 "is_configured": true, 00:25:07.993 "data_offset": 2048, 00:25:07.993 "data_size": 63488 00:25:07.993 } 00:25:07.993 ] 00:25:07.993 }' 00:25:07.993 16:42:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.993 16:42:39 -- common/autotest_common.sh@10 -- # set +x 00:25:08.562 16:42:39 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:08.821 [2024-07-13 16:42:40.050841] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:08.821 [2024-07-13 16:42:40.051059] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:08.821 [2024-07-13 16:42:40.057125] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:25:08.821 16:42:40 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:08.821 [2024-07-13 16:42:40.074624] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:09.757 16:42:41 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:09.757 16:42:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:09.757 16:42:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:09.757 16:42:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:09.757 16:42:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:09.757 16:42:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.757 16:42:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.016 16:42:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:10.016 "name": "raid_bdev1", 00:25:10.016 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:10.016 "strip_size_kb": 64, 00:25:10.016 "state": "online", 00:25:10.016 "raid_level": "raid5f", 00:25:10.016 "superblock": true, 00:25:10.016 "num_base_bdevs": 4, 00:25:10.016 "num_base_bdevs_discovered": 4, 00:25:10.016 "num_base_bdevs_operational": 4, 00:25:10.016 "process": { 00:25:10.016 "type": "rebuild", 00:25:10.016 "target": "spare", 00:25:10.016 "progress": { 00:25:10.016 "blocks": 23040, 00:25:10.016 "percent": 12 00:25:10.016 } 00:25:10.016 }, 00:25:10.016 "base_bdevs_list": [ 00:25:10.016 { 00:25:10.016 "name": "spare", 00:25:10.016 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:10.016 "is_configured": true, 
00:25:10.016 "data_offset": 2048, 00:25:10.016 "data_size": 63488 00:25:10.016 }, 00:25:10.016 { 00:25:10.016 "name": "BaseBdev2", 00:25:10.016 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:10.016 "is_configured": true, 00:25:10.016 "data_offset": 2048, 00:25:10.016 "data_size": 63488 00:25:10.016 }, 00:25:10.016 { 00:25:10.016 "name": "BaseBdev3", 00:25:10.016 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:10.016 "is_configured": true, 00:25:10.016 "data_offset": 2048, 00:25:10.016 "data_size": 63488 00:25:10.016 }, 00:25:10.016 { 00:25:10.016 "name": "BaseBdev4", 00:25:10.016 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:10.016 "is_configured": true, 00:25:10.016 "data_offset": 2048, 00:25:10.016 "data_size": 63488 00:25:10.016 } 00:25:10.016 ] 00:25:10.016 }' 00:25:10.016 16:42:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:10.016 16:42:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:10.016 16:42:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:10.016 16:42:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:10.016 16:42:41 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:10.275 [2024-07-13 16:42:41.647827] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:10.275 [2024-07-13 16:42:41.686825] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:10.275 [2024-07-13 16:42:41.687037] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.275 16:42:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.533 16:42:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:10.533 "name": "raid_bdev1", 00:25:10.533 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:10.533 "strip_size_kb": 64, 00:25:10.533 "state": "online", 00:25:10.533 "raid_level": "raid5f", 00:25:10.533 "superblock": true, 00:25:10.533 "num_base_bdevs": 4, 00:25:10.533 "num_base_bdevs_discovered": 3, 00:25:10.533 "num_base_bdevs_operational": 3, 00:25:10.533 "base_bdevs_list": [ 00:25:10.533 { 00:25:10.533 "name": null, 00:25:10.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.533 "is_configured": false, 00:25:10.533 "data_offset": 2048, 00:25:10.533 "data_size": 63488 00:25:10.533 }, 00:25:10.533 { 00:25:10.533 "name": "BaseBdev2", 00:25:10.533 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:10.533 "is_configured": true, 00:25:10.533 "data_offset": 
2048, 00:25:10.533 "data_size": 63488 00:25:10.533 }, 00:25:10.533 { 00:25:10.533 "name": "BaseBdev3", 00:25:10.533 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:10.533 "is_configured": true, 00:25:10.533 "data_offset": 2048, 00:25:10.533 "data_size": 63488 00:25:10.533 }, 00:25:10.533 { 00:25:10.533 "name": "BaseBdev4", 00:25:10.533 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:10.533 "is_configured": true, 00:25:10.533 "data_offset": 2048, 00:25:10.533 "data_size": 63488 00:25:10.533 } 00:25:10.533 ] 00:25:10.533 }' 00:25:10.533 16:42:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:10.533 16:42:41 -- common/autotest_common.sh@10 -- # set +x 00:25:11.098 16:42:42 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:11.098 16:42:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:11.098 16:42:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:11.098 16:42:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:11.098 16:42:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:11.098 16:42:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.098 16:42:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.356 16:42:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:11.356 "name": "raid_bdev1", 00:25:11.356 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:11.356 "strip_size_kb": 64, 00:25:11.356 "state": "online", 00:25:11.356 "raid_level": "raid5f", 00:25:11.356 "superblock": true, 00:25:11.356 "num_base_bdevs": 4, 00:25:11.356 "num_base_bdevs_discovered": 3, 00:25:11.356 "num_base_bdevs_operational": 3, 00:25:11.356 "base_bdevs_list": [ 00:25:11.356 { 00:25:11.356 "name": null, 00:25:11.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.356 "is_configured": false, 00:25:11.356 "data_offset": 2048, 00:25:11.356 "data_size": 63488 00:25:11.356 }, 00:25:11.356 { 00:25:11.356 "name": "BaseBdev2", 00:25:11.356 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:11.356 "is_configured": true, 00:25:11.356 "data_offset": 2048, 00:25:11.356 "data_size": 63488 00:25:11.356 }, 00:25:11.356 { 00:25:11.356 "name": "BaseBdev3", 00:25:11.356 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:11.356 "is_configured": true, 00:25:11.356 "data_offset": 2048, 00:25:11.356 "data_size": 63488 00:25:11.356 }, 00:25:11.356 { 00:25:11.356 "name": "BaseBdev4", 00:25:11.356 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:11.356 "is_configured": true, 00:25:11.356 "data_offset": 2048, 00:25:11.356 "data_size": 63488 00:25:11.356 } 00:25:11.356 ] 00:25:11.356 }' 00:25:11.356 16:42:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:11.614 16:42:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:11.614 16:42:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:11.614 16:42:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:11.614 16:42:42 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:11.872 [2024-07-13 16:42:43.128738] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:11.872 [2024-07-13 16:42:43.128961] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:11.872 [2024-07-13 16:42:43.134900] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000027240 00:25:11.872 [2024-07-13 16:42:43.137850] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:11.872 16:42:43 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:12.808 16:42:44 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:12.808 16:42:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:12.808 16:42:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:12.808 16:42:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:12.808 16:42:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:12.808 16:42:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.808 16:42:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.089 16:42:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:13.089 "name": "raid_bdev1", 00:25:13.089 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:13.089 "strip_size_kb": 64, 00:25:13.089 "state": "online", 00:25:13.089 "raid_level": "raid5f", 00:25:13.089 "superblock": true, 00:25:13.089 "num_base_bdevs": 4, 00:25:13.089 "num_base_bdevs_discovered": 4, 00:25:13.089 "num_base_bdevs_operational": 4, 00:25:13.089 "process": { 00:25:13.089 "type": "rebuild", 00:25:13.089 "target": "spare", 00:25:13.089 "progress": { 00:25:13.089 "blocks": 23040, 00:25:13.089 "percent": 12 00:25:13.089 } 00:25:13.089 }, 00:25:13.089 "base_bdevs_list": [ 00:25:13.089 { 00:25:13.089 "name": "spare", 00:25:13.089 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:13.089 "is_configured": true, 00:25:13.089 "data_offset": 2048, 00:25:13.089 "data_size": 63488 00:25:13.089 }, 00:25:13.089 { 00:25:13.089 "name": "BaseBdev2", 00:25:13.089 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:13.089 "is_configured": true, 00:25:13.089 "data_offset": 2048, 00:25:13.089 "data_size": 63488 00:25:13.089 }, 00:25:13.089 { 00:25:13.089 "name": "BaseBdev3", 00:25:13.089 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:13.089 "is_configured": true, 00:25:13.089 "data_offset": 2048, 00:25:13.089 "data_size": 63488 00:25:13.089 }, 00:25:13.089 { 00:25:13.089 "name": "BaseBdev4", 00:25:13.090 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:13.090 "is_configured": true, 00:25:13.090 "data_offset": 2048, 00:25:13.090 "data_size": 63488 00:25:13.090 } 00:25:13.090 ] 00:25:13.090 }' 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:13.090 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@657 -- # local timeout=700 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.090 16:42:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.355 16:42:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:13.355 "name": "raid_bdev1", 00:25:13.355 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:13.355 "strip_size_kb": 64, 00:25:13.355 "state": "online", 00:25:13.355 "raid_level": "raid5f", 00:25:13.355 "superblock": true, 00:25:13.355 "num_base_bdevs": 4, 00:25:13.355 "num_base_bdevs_discovered": 4, 00:25:13.355 "num_base_bdevs_operational": 4, 00:25:13.355 "process": { 00:25:13.355 "type": "rebuild", 00:25:13.355 "target": "spare", 00:25:13.355 "progress": { 00:25:13.355 "blocks": 28800, 00:25:13.355 "percent": 15 00:25:13.355 } 00:25:13.355 }, 00:25:13.355 "base_bdevs_list": [ 00:25:13.355 { 00:25:13.355 "name": "spare", 00:25:13.355 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:13.355 "is_configured": true, 00:25:13.355 "data_offset": 2048, 00:25:13.355 "data_size": 63488 00:25:13.355 }, 00:25:13.355 { 00:25:13.355 "name": "BaseBdev2", 00:25:13.355 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:13.355 "is_configured": true, 00:25:13.355 "data_offset": 2048, 00:25:13.355 "data_size": 63488 00:25:13.355 }, 00:25:13.355 { 00:25:13.355 "name": "BaseBdev3", 00:25:13.355 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:13.355 "is_configured": true, 00:25:13.355 "data_offset": 2048, 00:25:13.355 "data_size": 63488 00:25:13.355 }, 00:25:13.355 { 00:25:13.355 "name": "BaseBdev4", 00:25:13.355 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:13.355 "is_configured": true, 00:25:13.355 "data_offset": 2048, 00:25:13.355 "data_size": 63488 00:25:13.355 } 00:25:13.355 ] 00:25:13.355 }' 00:25:13.355 16:42:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:13.355 16:42:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.356 16:42:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:13.629 16:42:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.629 16:42:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:14.567 16:42:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:14.567 16:42:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:14.567 16:42:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:14.567 16:42:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:14.567 16:42:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:14.567 16:42:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:14.567 16:42:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.567 16:42:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.825 16:42:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:14.825 "name": "raid_bdev1", 00:25:14.825 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:14.825 "strip_size_kb": 64, 00:25:14.825 "state": "online", 00:25:14.825 "raid_level": "raid5f", 00:25:14.825 "superblock": true, 00:25:14.825 "num_base_bdevs": 4, 00:25:14.825 
"num_base_bdevs_discovered": 4, 00:25:14.825 "num_base_bdevs_operational": 4, 00:25:14.825 "process": { 00:25:14.825 "type": "rebuild", 00:25:14.825 "target": "spare", 00:25:14.825 "progress": { 00:25:14.825 "blocks": 53760, 00:25:14.825 "percent": 28 00:25:14.825 } 00:25:14.825 }, 00:25:14.825 "base_bdevs_list": [ 00:25:14.825 { 00:25:14.825 "name": "spare", 00:25:14.825 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:14.825 "is_configured": true, 00:25:14.825 "data_offset": 2048, 00:25:14.825 "data_size": 63488 00:25:14.825 }, 00:25:14.825 { 00:25:14.825 "name": "BaseBdev2", 00:25:14.825 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:14.825 "is_configured": true, 00:25:14.825 "data_offset": 2048, 00:25:14.825 "data_size": 63488 00:25:14.825 }, 00:25:14.825 { 00:25:14.825 "name": "BaseBdev3", 00:25:14.825 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:14.825 "is_configured": true, 00:25:14.825 "data_offset": 2048, 00:25:14.825 "data_size": 63488 00:25:14.825 }, 00:25:14.825 { 00:25:14.825 "name": "BaseBdev4", 00:25:14.825 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:14.825 "is_configured": true, 00:25:14.825 "data_offset": 2048, 00:25:14.825 "data_size": 63488 00:25:14.825 } 00:25:14.825 ] 00:25:14.825 }' 00:25:14.825 16:42:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:14.825 16:42:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:14.825 16:42:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:14.825 16:42:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:14.825 16:42:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:15.759 16:42:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:15.759 16:42:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.759 16:42:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:15.759 16:42:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:15.759 16:42:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:15.759 16:42:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:15.759 16:42:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.759 16:42:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.017 16:42:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:16.017 "name": "raid_bdev1", 00:25:16.017 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:16.017 "strip_size_kb": 64, 00:25:16.017 "state": "online", 00:25:16.017 "raid_level": "raid5f", 00:25:16.017 "superblock": true, 00:25:16.017 "num_base_bdevs": 4, 00:25:16.017 "num_base_bdevs_discovered": 4, 00:25:16.017 "num_base_bdevs_operational": 4, 00:25:16.017 "process": { 00:25:16.017 "type": "rebuild", 00:25:16.017 "target": "spare", 00:25:16.017 "progress": { 00:25:16.017 "blocks": 80640, 00:25:16.017 "percent": 42 00:25:16.017 } 00:25:16.017 }, 00:25:16.017 "base_bdevs_list": [ 00:25:16.017 { 00:25:16.017 "name": "spare", 00:25:16.017 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:16.017 "is_configured": true, 00:25:16.017 "data_offset": 2048, 00:25:16.017 "data_size": 63488 00:25:16.017 }, 00:25:16.017 { 00:25:16.017 "name": "BaseBdev2", 00:25:16.017 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:16.017 "is_configured": true, 00:25:16.017 "data_offset": 2048, 00:25:16.017 "data_size": 63488 00:25:16.017 }, 00:25:16.017 { 00:25:16.017 "name": "BaseBdev3", 00:25:16.017 
"uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:16.017 "is_configured": true, 00:25:16.017 "data_offset": 2048, 00:25:16.017 "data_size": 63488 00:25:16.017 }, 00:25:16.017 { 00:25:16.017 "name": "BaseBdev4", 00:25:16.017 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:16.017 "is_configured": true, 00:25:16.017 "data_offset": 2048, 00:25:16.017 "data_size": 63488 00:25:16.017 } 00:25:16.017 ] 00:25:16.017 }' 00:25:16.017 16:42:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:16.017 16:42:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.017 16:42:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:16.274 16:42:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.274 16:42:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:17.207 16:42:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:17.207 16:42:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.207 16:42:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:17.207 16:42:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:17.207 16:42:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:17.207 16:42:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:17.207 16:42:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.207 16:42:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.465 16:42:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:17.465 "name": "raid_bdev1", 00:25:17.465 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:17.465 "strip_size_kb": 64, 00:25:17.465 "state": "online", 00:25:17.465 "raid_level": "raid5f", 00:25:17.465 "superblock": true, 00:25:17.465 "num_base_bdevs": 4, 00:25:17.465 "num_base_bdevs_discovered": 4, 00:25:17.465 "num_base_bdevs_operational": 4, 00:25:17.465 "process": { 00:25:17.465 "type": "rebuild", 00:25:17.465 "target": "spare", 00:25:17.465 "progress": { 00:25:17.465 "blocks": 105600, 00:25:17.465 "percent": 55 00:25:17.465 } 00:25:17.465 }, 00:25:17.465 "base_bdevs_list": [ 00:25:17.465 { 00:25:17.465 "name": "spare", 00:25:17.465 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:17.465 "is_configured": true, 00:25:17.465 "data_offset": 2048, 00:25:17.465 "data_size": 63488 00:25:17.465 }, 00:25:17.465 { 00:25:17.465 "name": "BaseBdev2", 00:25:17.465 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:17.465 "is_configured": true, 00:25:17.465 "data_offset": 2048, 00:25:17.465 "data_size": 63488 00:25:17.465 }, 00:25:17.465 { 00:25:17.465 "name": "BaseBdev3", 00:25:17.465 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:17.465 "is_configured": true, 00:25:17.465 "data_offset": 2048, 00:25:17.465 "data_size": 63488 00:25:17.465 }, 00:25:17.465 { 00:25:17.465 "name": "BaseBdev4", 00:25:17.465 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:17.465 "is_configured": true, 00:25:17.465 "data_offset": 2048, 00:25:17.465 "data_size": 63488 00:25:17.465 } 00:25:17.465 ] 00:25:17.465 }' 00:25:17.465 16:42:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:17.465 16:42:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:17.465 16:42:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:17.465 16:42:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:17.465 16:42:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:18.399 
16:42:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:18.399 16:42:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:18.399 16:42:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:18.399 16:42:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:18.399 16:42:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:18.399 16:42:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:18.399 16:42:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.399 16:42:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.657 16:42:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:18.657 "name": "raid_bdev1", 00:25:18.657 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:18.657 "strip_size_kb": 64, 00:25:18.657 "state": "online", 00:25:18.657 "raid_level": "raid5f", 00:25:18.657 "superblock": true, 00:25:18.657 "num_base_bdevs": 4, 00:25:18.657 "num_base_bdevs_discovered": 4, 00:25:18.657 "num_base_bdevs_operational": 4, 00:25:18.657 "process": { 00:25:18.657 "type": "rebuild", 00:25:18.657 "target": "spare", 00:25:18.657 "progress": { 00:25:18.657 "blocks": 130560, 00:25:18.657 "percent": 68 00:25:18.657 } 00:25:18.657 }, 00:25:18.657 "base_bdevs_list": [ 00:25:18.657 { 00:25:18.657 "name": "spare", 00:25:18.657 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:18.657 "is_configured": true, 00:25:18.657 "data_offset": 2048, 00:25:18.657 "data_size": 63488 00:25:18.657 }, 00:25:18.657 { 00:25:18.657 "name": "BaseBdev2", 00:25:18.657 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:18.657 "is_configured": true, 00:25:18.657 "data_offset": 2048, 00:25:18.657 "data_size": 63488 00:25:18.657 }, 00:25:18.657 { 00:25:18.657 "name": "BaseBdev3", 00:25:18.657 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:18.657 "is_configured": true, 00:25:18.657 "data_offset": 2048, 00:25:18.657 "data_size": 63488 00:25:18.657 }, 00:25:18.657 { 00:25:18.657 "name": "BaseBdev4", 00:25:18.657 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:18.657 "is_configured": true, 00:25:18.657 "data_offset": 2048, 00:25:18.657 "data_size": 63488 00:25:18.657 } 00:25:18.657 ] 00:25:18.657 }' 00:25:18.657 16:42:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:18.657 16:42:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.657 16:42:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:18.657 16:42:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.657 16:42:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:20.032 "name": "raid_bdev1", 
00:25:20.032 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:20.032 "strip_size_kb": 64, 00:25:20.032 "state": "online", 00:25:20.032 "raid_level": "raid5f", 00:25:20.032 "superblock": true, 00:25:20.032 "num_base_bdevs": 4, 00:25:20.032 "num_base_bdevs_discovered": 4, 00:25:20.032 "num_base_bdevs_operational": 4, 00:25:20.032 "process": { 00:25:20.032 "type": "rebuild", 00:25:20.032 "target": "spare", 00:25:20.032 "progress": { 00:25:20.032 "blocks": 155520, 00:25:20.032 "percent": 81 00:25:20.032 } 00:25:20.032 }, 00:25:20.032 "base_bdevs_list": [ 00:25:20.032 { 00:25:20.032 "name": "spare", 00:25:20.032 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:20.032 "is_configured": true, 00:25:20.032 "data_offset": 2048, 00:25:20.032 "data_size": 63488 00:25:20.032 }, 00:25:20.032 { 00:25:20.032 "name": "BaseBdev2", 00:25:20.032 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:20.032 "is_configured": true, 00:25:20.032 "data_offset": 2048, 00:25:20.032 "data_size": 63488 00:25:20.032 }, 00:25:20.032 { 00:25:20.032 "name": "BaseBdev3", 00:25:20.032 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:20.032 "is_configured": true, 00:25:20.032 "data_offset": 2048, 00:25:20.032 "data_size": 63488 00:25:20.032 }, 00:25:20.032 { 00:25:20.032 "name": "BaseBdev4", 00:25:20.032 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:20.032 "is_configured": true, 00:25:20.032 "data_offset": 2048, 00:25:20.032 "data_size": 63488 00:25:20.032 } 00:25:20.032 ] 00:25:20.032 }' 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:20.032 16:42:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:21.409 16:42:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:21.410 "name": "raid_bdev1", 00:25:21.410 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:21.410 "strip_size_kb": 64, 00:25:21.410 "state": "online", 00:25:21.410 "raid_level": "raid5f", 00:25:21.410 "superblock": true, 00:25:21.410 "num_base_bdevs": 4, 00:25:21.410 "num_base_bdevs_discovered": 4, 00:25:21.410 "num_base_bdevs_operational": 4, 00:25:21.410 "process": { 00:25:21.410 "type": "rebuild", 00:25:21.410 "target": "spare", 00:25:21.410 "progress": { 00:25:21.410 "blocks": 180480, 00:25:21.410 "percent": 94 00:25:21.410 } 00:25:21.410 }, 00:25:21.410 "base_bdevs_list": [ 00:25:21.410 { 00:25:21.410 "name": "spare", 00:25:21.410 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:21.410 "is_configured": true, 00:25:21.410 "data_offset": 2048, 00:25:21.410 "data_size": 63488 00:25:21.410 }, 00:25:21.410 { 00:25:21.410 "name": 
"BaseBdev2", 00:25:21.410 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:21.410 "is_configured": true, 00:25:21.410 "data_offset": 2048, 00:25:21.410 "data_size": 63488 00:25:21.410 }, 00:25:21.410 { 00:25:21.410 "name": "BaseBdev3", 00:25:21.410 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:21.410 "is_configured": true, 00:25:21.410 "data_offset": 2048, 00:25:21.410 "data_size": 63488 00:25:21.410 }, 00:25:21.410 { 00:25:21.410 "name": "BaseBdev4", 00:25:21.410 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:21.410 "is_configured": true, 00:25:21.410 "data_offset": 2048, 00:25:21.410 "data_size": 63488 00:25:21.410 } 00:25:21.410 ] 00:25:21.410 }' 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.410 16:42:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:21.992 [2024-07-13 16:42:53.208033] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:21.992 [2024-07-13 16:42:53.208243] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:21.992 [2024-07-13 16:42:53.208518] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.558 16:42:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:22.558 16:42:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:22.558 16:42:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:22.558 16:42:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:22.558 16:42:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:22.558 16:42:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:22.558 16:42:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.558 16:42:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.558 16:42:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:22.558 "name": "raid_bdev1", 00:25:22.558 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:22.558 "strip_size_kb": 64, 00:25:22.558 "state": "online", 00:25:22.558 "raid_level": "raid5f", 00:25:22.558 "superblock": true, 00:25:22.558 "num_base_bdevs": 4, 00:25:22.558 "num_base_bdevs_discovered": 4, 00:25:22.558 "num_base_bdevs_operational": 4, 00:25:22.558 "base_bdevs_list": [ 00:25:22.558 { 00:25:22.558 "name": "spare", 00:25:22.558 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:22.558 "is_configured": true, 00:25:22.558 "data_offset": 2048, 00:25:22.558 "data_size": 63488 00:25:22.558 }, 00:25:22.558 { 00:25:22.558 "name": "BaseBdev2", 00:25:22.558 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:22.558 "is_configured": true, 00:25:22.558 "data_offset": 2048, 00:25:22.558 "data_size": 63488 00:25:22.558 }, 00:25:22.558 { 00:25:22.558 "name": "BaseBdev3", 00:25:22.558 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:22.558 "is_configured": true, 00:25:22.558 "data_offset": 2048, 00:25:22.558 "data_size": 63488 00:25:22.558 }, 00:25:22.558 { 00:25:22.558 "name": "BaseBdev4", 00:25:22.558 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:22.558 "is_configured": true, 00:25:22.558 "data_offset": 2048, 00:25:22.558 "data_size": 63488 00:25:22.558 } 
00:25:22.558 ] 00:25:22.558 }' 00:25:22.558 16:42:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@660 -- # break 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.817 16:42:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:23.075 "name": "raid_bdev1", 00:25:23.075 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:23.075 "strip_size_kb": 64, 00:25:23.075 "state": "online", 00:25:23.075 "raid_level": "raid5f", 00:25:23.075 "superblock": true, 00:25:23.075 "num_base_bdevs": 4, 00:25:23.075 "num_base_bdevs_discovered": 4, 00:25:23.075 "num_base_bdevs_operational": 4, 00:25:23.075 "base_bdevs_list": [ 00:25:23.075 { 00:25:23.075 "name": "spare", 00:25:23.075 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:23.075 "is_configured": true, 00:25:23.075 "data_offset": 2048, 00:25:23.075 "data_size": 63488 00:25:23.075 }, 00:25:23.075 { 00:25:23.075 "name": "BaseBdev2", 00:25:23.075 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:23.075 "is_configured": true, 00:25:23.075 "data_offset": 2048, 00:25:23.075 "data_size": 63488 00:25:23.075 }, 00:25:23.075 { 00:25:23.075 "name": "BaseBdev3", 00:25:23.075 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:23.075 "is_configured": true, 00:25:23.075 "data_offset": 2048, 00:25:23.075 "data_size": 63488 00:25:23.075 }, 00:25:23.075 { 00:25:23.075 "name": "BaseBdev4", 00:25:23.075 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:23.075 "is_configured": true, 00:25:23.075 "data_offset": 2048, 00:25:23.075 "data_size": 63488 00:25:23.075 } 00:25:23.075 ] 00:25:23.075 }' 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:23.075 16:42:54 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.075 16:42:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.334 16:42:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:23.334 "name": "raid_bdev1", 00:25:23.334 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:23.334 "strip_size_kb": 64, 00:25:23.334 "state": "online", 00:25:23.334 "raid_level": "raid5f", 00:25:23.334 "superblock": true, 00:25:23.334 "num_base_bdevs": 4, 00:25:23.334 "num_base_bdevs_discovered": 4, 00:25:23.334 "num_base_bdevs_operational": 4, 00:25:23.334 "base_bdevs_list": [ 00:25:23.334 { 00:25:23.334 "name": "spare", 00:25:23.334 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:23.334 "is_configured": true, 00:25:23.334 "data_offset": 2048, 00:25:23.334 "data_size": 63488 00:25:23.334 }, 00:25:23.334 { 00:25:23.334 "name": "BaseBdev2", 00:25:23.334 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:23.334 "is_configured": true, 00:25:23.334 "data_offset": 2048, 00:25:23.334 "data_size": 63488 00:25:23.334 }, 00:25:23.334 { 00:25:23.334 "name": "BaseBdev3", 00:25:23.334 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:23.334 "is_configured": true, 00:25:23.334 "data_offset": 2048, 00:25:23.334 "data_size": 63488 00:25:23.334 }, 00:25:23.334 { 00:25:23.334 "name": "BaseBdev4", 00:25:23.334 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:23.334 "is_configured": true, 00:25:23.334 "data_offset": 2048, 00:25:23.334 "data_size": 63488 00:25:23.334 } 00:25:23.334 ] 00:25:23.334 }' 00:25:23.334 16:42:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:23.334 16:42:54 -- common/autotest_common.sh@10 -- # set +x 00:25:23.901 16:42:55 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:23.901 [2024-07-13 16:42:55.313638] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:23.901 [2024-07-13 16:42:55.313876] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:23.901 [2024-07-13 16:42:55.314116] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:23.901 [2024-07-13 16:42:55.314329] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:23.901 [2024-07-13 16:42:55.314441] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:23.901 16:42:55 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:23.901 16:42:55 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.159 16:42:55 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:24.159 16:42:55 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:24.159 16:42:55 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:24.159 16:42:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:24.159 16:42:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:24.159 16:42:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:24.159 16:42:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:24.159 16:42:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:24.159 16:42:55 -- 
bdev/nbd_common.sh@12 -- # local i 00:25:24.159 16:42:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:24.159 16:42:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:24.159 16:42:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:24.418 /dev/nbd0 00:25:24.418 16:42:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:24.418 16:42:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:24.418 16:42:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:24.418 16:42:55 -- common/autotest_common.sh@857 -- # local i 00:25:24.418 16:42:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:24.418 16:42:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:24.418 16:42:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:24.418 16:42:55 -- common/autotest_common.sh@861 -- # break 00:25:24.418 16:42:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:24.418 16:42:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:24.418 16:42:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:24.418 1+0 records in 00:25:24.418 1+0 records out 00:25:24.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581008 s, 7.0 MB/s 00:25:24.418 16:42:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:24.418 16:42:55 -- common/autotest_common.sh@874 -- # size=4096 00:25:24.418 16:42:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:24.418 16:42:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:24.418 16:42:55 -- common/autotest_common.sh@877 -- # return 0 00:25:24.418 16:42:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:24.418 16:42:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:24.418 16:42:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:24.677 /dev/nbd1 00:25:24.677 16:42:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:24.936 16:42:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:24.936 16:42:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:24.936 16:42:56 -- common/autotest_common.sh@857 -- # local i 00:25:24.936 16:42:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:24.936 16:42:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:24.936 16:42:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:24.936 16:42:56 -- common/autotest_common.sh@861 -- # break 00:25:24.936 16:42:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:24.936 16:42:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:24.936 16:42:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:24.936 1+0 records in 00:25:24.936 1+0 records out 00:25:24.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531083 s, 7.7 MB/s 00:25:24.936 16:42:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:24.936 16:42:56 -- common/autotest_common.sh@874 -- # size=4096 00:25:24.936 16:42:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:24.936 16:42:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:24.936 16:42:56 -- 
common/autotest_common.sh@877 -- # return 0 00:25:24.936 16:42:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:24.936 16:42:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:24.936 16:42:56 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:24.936 16:42:56 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:24.936 16:42:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:24.936 16:42:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:24.936 16:42:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:24.936 16:42:56 -- bdev/nbd_common.sh@51 -- # local i 00:25:24.936 16:42:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:24.936 16:42:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:25.194 16:42:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:25.194 16:42:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:25.194 16:42:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:25.194 16:42:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:25.194 16:42:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:25.194 16:42:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:25.194 16:42:56 -- bdev/nbd_common.sh@41 -- # break 00:25:25.194 16:42:56 -- bdev/nbd_common.sh@45 -- # return 0 00:25:25.194 16:42:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:25.195 16:42:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:25.195 16:42:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:25.195 16:42:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:25.195 16:42:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:25.195 16:42:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:25.195 16:42:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:25.195 16:42:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:25.195 16:42:56 -- bdev/nbd_common.sh@41 -- # break 00:25:25.195 16:42:56 -- bdev/nbd_common.sh@45 -- # return 0 00:25:25.195 16:42:56 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:25.195 16:42:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:25.195 16:42:56 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:25.195 16:42:56 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:25.477 16:42:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:25.736 [2024-07-13 16:42:57.060746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:25.736 [2024-07-13 16:42:57.061016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.736 [2024-07-13 16:42:57.061105] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:25.736 [2024-07-13 16:42:57.061370] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.736 [2024-07-13 16:42:57.064129] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.736 [2024-07-13 16:42:57.064339] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:25.736 [2024-07-13 
16:42:57.064532] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:25.736 [2024-07-13 16:42:57.064672] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:25.736 BaseBdev1 00:25:25.736 16:42:57 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:25.736 16:42:57 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:25:25.736 16:42:57 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:25:25.995 16:42:57 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:26.253 [2024-07-13 16:42:57.496990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:26.253 [2024-07-13 16:42:57.497262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.253 [2024-07-13 16:42:57.497344] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:26.253 [2024-07-13 16:42:57.497442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.253 [2024-07-13 16:42:57.497936] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.253 [2024-07-13 16:42:57.498097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:26.253 [2024-07-13 16:42:57.498271] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:25:26.253 [2024-07-13 16:42:57.498354] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:25:26.253 [2024-07-13 16:42:57.498422] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:26.253 [2024-07-13 16:42:57.498484] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:25:26.253 [2024-07-13 16:42:57.498568] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:26.253 BaseBdev2 00:25:26.253 16:42:57 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:26.253 16:42:57 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:26.253 16:42:57 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:26.253 16:42:57 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:26.512 [2024-07-13 16:42:57.845005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:26.512 [2024-07-13 16:42:57.845287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.512 [2024-07-13 16:42:57.845380] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:26.512 [2024-07-13 16:42:57.845497] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.512 [2024-07-13 16:42:57.846039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.512 [2024-07-13 16:42:57.846193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:26.512 [2024-07-13 16:42:57.846347] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev3 00:25:26.512 [2024-07-13 16:42:57.846435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:26.512 BaseBdev3 00:25:26.512 16:42:57 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:26.512 16:42:57 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:25:26.512 16:42:57 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:26.771 16:42:58 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:26.771 [2024-07-13 16:42:58.181102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:26.771 [2024-07-13 16:42:58.181374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.771 [2024-07-13 16:42:58.181446] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:26.771 [2024-07-13 16:42:58.181568] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.771 [2024-07-13 16:42:58.182056] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.771 [2024-07-13 16:42:58.182204] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:26.771 [2024-07-13 16:42:58.182363] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:26.771 [2024-07-13 16:42:58.182464] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:26.771 BaseBdev4 00:25:26.771 16:42:58 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:27.031 16:42:58 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:27.291 [2024-07-13 16:42:58.617148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:27.291 [2024-07-13 16:42:58.617412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.291 [2024-07-13 16:42:58.617485] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:25:27.291 [2024-07-13 16:42:58.617583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.291 [2024-07-13 16:42:58.618119] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.291 [2024-07-13 16:42:58.618289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:27.291 [2024-07-13 16:42:58.618479] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:27.291 [2024-07-13 16:42:58.618601] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:27.291 spare 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:27.291 16:42:58 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.291 16:42:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.291 [2024-07-13 16:42:58.718859] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:25:27.291 [2024-07-13 16:42:58.719043] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:27.291 [2024-07-13 16:42:58.719262] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045ea0 00:25:27.291 [2024-07-13 16:42:58.720172] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:25:27.291 [2024-07-13 16:42:58.720297] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:25:27.291 [2024-07-13 16:42:58.720544] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.550 16:42:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:27.550 "name": "raid_bdev1", 00:25:27.550 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:27.550 "strip_size_kb": 64, 00:25:27.550 "state": "online", 00:25:27.550 "raid_level": "raid5f", 00:25:27.550 "superblock": true, 00:25:27.550 "num_base_bdevs": 4, 00:25:27.550 "num_base_bdevs_discovered": 4, 00:25:27.550 "num_base_bdevs_operational": 4, 00:25:27.550 "base_bdevs_list": [ 00:25:27.550 { 00:25:27.550 "name": "spare", 00:25:27.550 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:27.550 "is_configured": true, 00:25:27.550 "data_offset": 2048, 00:25:27.550 "data_size": 63488 00:25:27.550 }, 00:25:27.550 { 00:25:27.550 "name": "BaseBdev2", 00:25:27.550 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:27.550 "is_configured": true, 00:25:27.550 "data_offset": 2048, 00:25:27.550 "data_size": 63488 00:25:27.550 }, 00:25:27.550 { 00:25:27.550 "name": "BaseBdev3", 00:25:27.550 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:27.550 "is_configured": true, 00:25:27.550 "data_offset": 2048, 00:25:27.550 "data_size": 63488 00:25:27.550 }, 00:25:27.550 { 00:25:27.550 "name": "BaseBdev4", 00:25:27.550 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:27.550 "is_configured": true, 00:25:27.550 "data_offset": 2048, 00:25:27.550 "data_size": 63488 00:25:27.550 } 00:25:27.550 ] 00:25:27.550 }' 00:25:27.550 16:42:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:27.550 16:42:58 -- common/autotest_common.sh@10 -- # set +x 00:25:28.117 16:42:59 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:28.117 16:42:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:28.117 16:42:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:28.117 16:42:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:28.117 16:42:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:28.117 16:42:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.117 16:42:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.376 16:42:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:28.376 "name": 
"raid_bdev1", 00:25:28.376 "uuid": "b9a17952-366d-4d88-ac56-f9c3c3a6734e", 00:25:28.376 "strip_size_kb": 64, 00:25:28.376 "state": "online", 00:25:28.376 "raid_level": "raid5f", 00:25:28.376 "superblock": true, 00:25:28.376 "num_base_bdevs": 4, 00:25:28.376 "num_base_bdevs_discovered": 4, 00:25:28.376 "num_base_bdevs_operational": 4, 00:25:28.376 "base_bdevs_list": [ 00:25:28.376 { 00:25:28.376 "name": "spare", 00:25:28.376 "uuid": "ee287392-2296-5697-8b9d-f7c3482b42c6", 00:25:28.376 "is_configured": true, 00:25:28.376 "data_offset": 2048, 00:25:28.376 "data_size": 63488 00:25:28.376 }, 00:25:28.376 { 00:25:28.376 "name": "BaseBdev2", 00:25:28.376 "uuid": "1dffdfcf-d05d-5d18-8a29-529cfb77c0f7", 00:25:28.376 "is_configured": true, 00:25:28.376 "data_offset": 2048, 00:25:28.376 "data_size": 63488 00:25:28.376 }, 00:25:28.376 { 00:25:28.376 "name": "BaseBdev3", 00:25:28.376 "uuid": "d62eae8c-edf0-5c4d-a8ec-9bc6841e3a25", 00:25:28.376 "is_configured": true, 00:25:28.376 "data_offset": 2048, 00:25:28.376 "data_size": 63488 00:25:28.376 }, 00:25:28.376 { 00:25:28.376 "name": "BaseBdev4", 00:25:28.376 "uuid": "47478364-ec30-5f6c-af1d-9c5677e18953", 00:25:28.376 "is_configured": true, 00:25:28.376 "data_offset": 2048, 00:25:28.376 "data_size": 63488 00:25:28.376 } 00:25:28.376 ] 00:25:28.376 }' 00:25:28.376 16:42:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:28.376 16:42:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:28.376 16:42:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:28.376 16:42:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:28.376 16:42:59 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.376 16:42:59 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:28.635 16:42:59 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:28.635 16:42:59 -- bdev/bdev_raid.sh@709 -- # killprocess 142067 00:25:28.635 16:42:59 -- common/autotest_common.sh@926 -- # '[' -z 142067 ']' 00:25:28.635 16:42:59 -- common/autotest_common.sh@930 -- # kill -0 142067 00:25:28.635 16:42:59 -- common/autotest_common.sh@931 -- # uname 00:25:28.635 16:42:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:28.635 16:42:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142067 00:25:28.635 16:43:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:28.635 16:43:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:28.635 16:43:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142067' 00:25:28.635 killing process with pid 142067 00:25:28.635 16:43:00 -- common/autotest_common.sh@945 -- # kill 142067 00:25:28.635 Received shutdown signal, test time was about 60.000000 seconds 00:25:28.635 00:25:28.635 Latency(us) 00:25:28.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.635 =================================================================================================================== 00:25:28.635 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:28.635 16:43:00 -- common/autotest_common.sh@950 -- # wait 142067 00:25:28.635 [2024-07-13 16:43:00.016020] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:28.635 [2024-07-13 16:43:00.016128] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.635 [2024-07-13 16:43:00.016256] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.635 [2024-07-13 16:43:00.016466] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:25:28.635 [2024-07-13 16:43:00.105522] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:29.205 16:43:00 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:29.205 00:25:29.205 real 0m27.129s 00:25:29.205 user 0m40.320s 00:25:29.205 sys 0m4.130s 00:25:29.205 16:43:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.205 16:43:00 -- common/autotest_common.sh@10 -- # set +x 00:25:29.205 ************************************ 00:25:29.205 END TEST raid5f_rebuild_test_sb 00:25:29.205 ************************************ 00:25:29.205 16:43:00 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:25:29.205 ************************************ 00:25:29.205 END TEST bdev_raid 00:25:29.205 ************************************ 00:25:29.205 00:25:29.205 real 11m26.518s 00:25:29.205 user 18m41.181s 00:25:29.205 sys 2m9.110s 00:25:29.205 16:43:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.205 16:43:00 -- common/autotest_common.sh@10 -- # set +x 00:25:29.205 16:43:00 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:25:29.205 16:43:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:29.205 16:43:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:29.205 16:43:00 -- common/autotest_common.sh@10 -- # set +x 00:25:29.205 ************************************ 00:25:29.205 START TEST bdevperf_config 00:25:29.205 ************************************ 00:25:29.205 16:43:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:25:29.464 * Looking for test storage... 
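# Aside, before the bdevperf_config run below: the rebuild wait traced through
# the raid5f section above (bdev_raid.sh@658-662) polls the RPC socket once per
# second until the in-progress rebuild disappears from bdev_raid_get_bdevs
# output. A minimal sketch of that loop, condensed from the trace -- rpc.py
# path, socket, and jq filters exactly as shown above; `timeout` is set earlier
# in the script and is assumed here:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # While the rebuild runs, .process.type is "rebuild" and .process.target is
    # "spare"; once it completes, .process vanishes and both fallbacks yield "none".
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
    sleep 1
done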
00:25:29.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:25:29.464 16:43:00 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:25:29.465 16:43:00 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:25:29.465 16:43:00 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:25:29.465 16:43:00 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:29.465 16:43:00 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:29.465 16:43:00 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:25:29.465 16:43:00 -- bdevperf/common.sh@8 -- # local job_section=global 00:25:29.465 16:43:00 -- bdevperf/common.sh@9 -- # local rw=read 00:25:29.465 16:43:00 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:29.465 16:43:00 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:25:29.465 16:43:00 -- bdevperf/common.sh@13 -- # cat 00:25:29.465 16:43:00 -- bdevperf/common.sh@18 -- # job='[global]' 00:25:29.465 16:43:00 -- bdevperf/common.sh@19 -- # echo 00:25:29.465 00:25:29.465 16:43:00 -- bdevperf/common.sh@20 -- # cat 00:25:29.465 16:43:00 -- bdevperf/test_config.sh@18 -- # create_job job0 00:25:29.465 16:43:00 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:29.465 16:43:00 -- bdevperf/common.sh@9 -- # local rw= 00:25:29.465 16:43:00 -- bdevperf/common.sh@10 -- # local filename= 00:25:29.465 16:43:00 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:29.465 16:43:00 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:29.465 16:43:00 -- bdevperf/common.sh@19 -- # echo 00:25:29.465 00:25:29.465 16:43:00 -- bdevperf/common.sh@20 -- # cat 00:25:29.465 16:43:00 -- bdevperf/test_config.sh@19 -- # create_job job1 00:25:29.465 16:43:00 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:29.465 16:43:00 -- bdevperf/common.sh@9 -- # local rw= 00:25:29.465 16:43:00 -- bdevperf/common.sh@10 -- # local filename= 00:25:29.465 16:43:00 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:29.465 16:43:00 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:29.465 16:43:00 -- bdevperf/common.sh@19 -- # echo 00:25:29.465 00:25:29.465 16:43:00 -- bdevperf/common.sh@20 -- # cat 00:25:29.465 16:43:00 -- bdevperf/test_config.sh@20 -- # create_job job2 00:25:29.465 16:43:00 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:29.465 16:43:00 -- bdevperf/common.sh@9 -- # local rw= 00:25:29.465 16:43:00 -- bdevperf/common.sh@10 -- # local filename= 00:25:29.465 16:43:00 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:29.465 16:43:00 -- bdevperf/common.sh@18 -- # job='[job2]' 00:25:29.465 16:43:00 -- bdevperf/common.sh@19 -- # echo 00:25:29.465 00:25:29.465 16:43:00 -- bdevperf/common.sh@20 -- # cat 00:25:29.465 16:43:00 -- bdevperf/test_config.sh@21 -- # create_job job3 00:25:29.465 16:43:00 -- bdevperf/common.sh@8 -- # local job_section=job3 00:25:29.465 16:43:00 -- bdevperf/common.sh@9 -- # local rw= 00:25:29.465 16:43:00 -- bdevperf/common.sh@10 -- # local filename= 00:25:29.465 16:43:00 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:25:29.465 16:43:00 -- bdevperf/common.sh@18 -- # job='[job3]' 00:25:29.465 16:43:00 -- bdevperf/common.sh@19 -- # echo 00:25:29.465 00:25:29.465 16:43:00 -- bdevperf/common.sh@20 -- # cat 00:25:29.465 16:43:00 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:32.753 16:43:03 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-13 16:43:00.838647] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:32.753 [2024-07-13 16:43:00.838912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142812 ] 00:25:32.753 Using job config with 4 jobs 00:25:32.753 [2024-07-13 16:43:00.990652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.754 [2024-07-13 16:43:01.076915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.754 cpumask for '\''job0'\'' is too big 00:25:32.754 cpumask for '\''job1'\'' is too big 00:25:32.754 cpumask for '\''job2'\'' is too big 00:25:32.754 cpumask for '\''job3'\'' is too big 00:25:32.754 Running I/O for 2 seconds... 00:25:32.754 00:25:32.754 Latency(us) 00:25:32.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.01 35545.88 34.71 0.00 0.00 7196.00 1365.33 11172.33 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.02 35554.07 34.72 0.00 0.00 7183.15 1295.12 9799.19 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.02 35532.91 34.70 0.00 0.00 7175.97 1310.72 8488.47 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.02 35510.78 34.68 0.00 0.00 7169.88 1310.72 8238.81 00:25:32.754 =================================================================================================================== 00:25:32.754 Total : 142143.64 138.81 0.00 0.00 7181.23 1295.12 11172.33' 00:25:32.754 16:43:03 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-13 16:43:00.838647] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:32.754 [2024-07-13 16:43:00.838912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142812 ] 00:25:32.754 Using job config with 4 jobs 00:25:32.754 [2024-07-13 16:43:00.990652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.754 [2024-07-13 16:43:01.076915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.754 cpumask for '\''job0'\'' is too big 00:25:32.754 cpumask for '\''job1'\'' is too big 00:25:32.754 cpumask for '\''job2'\'' is too big 00:25:32.754 cpumask for '\''job3'\'' is too big 00:25:32.754 Running I/O for 2 seconds... 
00:25:32.754 00:25:32.754 Latency(us) 00:25:32.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.01 35545.88 34.71 0.00 0.00 7196.00 1365.33 11172.33 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.02 35554.07 34.72 0.00 0.00 7183.15 1295.12 9799.19 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.02 35532.91 34.70 0.00 0.00 7175.97 1310.72 8488.47 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.02 35510.78 34.68 0.00 0.00 7169.88 1310.72 8238.81 00:25:32.754 =================================================================================================================== 00:25:32.754 Total : 142143.64 138.81 0.00 0.00 7181.23 1295.12 11172.33' 00:25:32.754 16:43:03 -- bdevperf/common.sh@32 -- # echo '[2024-07-13 16:43:00.838647] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:32.754 [2024-07-13 16:43:00.838912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142812 ] 00:25:32.754 Using job config with 4 jobs 00:25:32.754 [2024-07-13 16:43:00.990652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.754 [2024-07-13 16:43:01.076915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.754 cpumask for '\''job0'\'' is too big 00:25:32.754 cpumask for '\''job1'\'' is too big 00:25:32.754 cpumask for '\''job2'\'' is too big 00:25:32.754 cpumask for '\''job3'\'' is too big 00:25:32.754 Running I/O for 2 seconds... 00:25:32.754 00:25:32.754 Latency(us) 00:25:32.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.01 35545.88 34.71 0.00 0.00 7196.00 1365.33 11172.33 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.02 35554.07 34.72 0.00 0.00 7183.15 1295.12 9799.19 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.02 35532.91 34.70 0.00 0.00 7175.97 1310.72 8488.47 00:25:32.754 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:32.754 Malloc0 : 2.02 35510.78 34.68 0.00 0.00 7169.88 1310.72 8238.81 00:25:32.754 =================================================================================================================== 00:25:32.754 Total : 142143.64 138.81 0.00 0.00 7181.23 1295.12 11172.33' 00:25:32.754 16:43:03 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:32.754 16:43:03 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:32.754 16:43:03 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:25:32.754 16:43:03 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:32.754 [2024-07-13 16:43:03.835870] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
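# The get_num_jobs helper called at test_config.sh@23 above reduces to the two
# grep passes visible in the trace (bdevperf/common.sh@32). A minimal sketch,
# assuming the helper takes the captured bdevperf output as its one argument:

get_num_jobs() {
    # Isolate the "Using job config with N jobs" line, then strip it down to N.
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

# Here get_num_jobs "$bdevperf_output" prints 4, satisfying the
# [[ 4 == \4 ]] check traced at test_config.sh@23.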
00:25:32.754 [2024-07-13 16:43:03.836297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142856 ] 00:25:32.754 [2024-07-13 16:43:03.976949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.754 [2024-07-13 16:43:04.060933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.018 cpumask for 'job0' is too big 00:25:33.018 cpumask for 'job1' is too big 00:25:33.018 cpumask for 'job2' is too big 00:25:33.018 cpumask for 'job3' is too big 00:25:35.569 16:43:06 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:25:35.569 Running I/O for 2 seconds... 00:25:35.569 00:25:35.569 Latency(us) 00:25:35.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.569 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:35.569 Malloc0 : 2.01 35557.77 34.72 0.00 0.00 7193.41 1458.96 11609.23 00:25:35.569 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:35.570 Malloc0 : 2.02 35564.45 34.73 0.00 0.00 7180.72 1419.95 10173.68 00:25:35.570 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:35.570 Malloc0 : 2.02 35540.92 34.71 0.00 0.00 7173.78 1357.53 8738.13 00:25:35.570 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:35.570 Malloc0 : 2.02 35518.19 34.69 0.00 0.00 7166.77 1302.92 7739.49 00:25:35.570 =================================================================================================================== 00:25:35.570 Total : 142181.33 138.85 0.00 0.00 7178.66 1302.92 11609.23' 00:25:35.570 16:43:06 -- bdevperf/test_config.sh@27 -- # cleanup 00:25:35.570 16:43:06 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:35.570 00:25:35.570 16:43:06 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:25:35.570 16:43:06 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:35.570 16:43:06 -- bdevperf/common.sh@9 -- # local rw=write 00:25:35.570 16:43:06 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:35.570 16:43:06 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:35.570 16:43:06 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:35.570 16:43:06 -- bdevperf/common.sh@19 -- # echo 00:25:35.570 16:43:06 -- bdevperf/common.sh@20 -- # cat 00:25:35.570 16:43:06 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:25:35.570 16:43:06 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:35.570 16:43:06 -- bdevperf/common.sh@9 -- # local rw=write 00:25:35.570 16:43:06 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:35.570 16:43:06 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:35.570 16:43:06 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:35.570 16:43:06 -- bdevperf/common.sh@19 -- # echo 00:25:35.570 00:25:35.570 16:43:06 -- bdevperf/common.sh@20 -- # cat 00:25:35.570 16:43:06 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:25:35.570 16:43:06 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:35.570 16:43:06 -- bdevperf/common.sh@9 -- # local rw=write 00:25:35.570 16:43:06 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:35.570 16:43:06 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:35.570 16:43:06 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:25:35.570 16:43:06 -- bdevperf/common.sh@19 -- # echo 00:25:35.570 00:25:35.570 16:43:06 -- bdevperf/common.sh@20 -- # cat 00:25:35.570 16:43:06 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:38.858 16:43:09 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-13 16:43:06.862300] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:38.858 [2024-07-13 16:43:06.862584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142895 ] 00:25:38.858 Using job config with 3 jobs 00:25:38.859 [2024-07-13 16:43:07.019403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.859 [2024-07-13 16:43:07.114302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.859 cpumask for '\''job0'\'' is too big 00:25:38.859 cpumask for '\''job1'\'' is too big 00:25:38.859 cpumask for '\''job2'\'' is too big 00:25:38.859 Running I/O for 2 seconds... 00:25:38.859 00:25:38.859 Latency(us) 00:25:38.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.859 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:38.859 Malloc0 : 2.01 47740.91 46.62 0.00 0.00 5356.85 1443.35 8426.06 00:25:38.859 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:38.859 Malloc0 : 2.01 47711.11 46.59 0.00 0.00 5351.88 1443.35 7115.34 00:25:38.859 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:38.859 Malloc0 : 2.01 47680.04 46.56 0.00 0.00 5346.84 1357.53 6116.69 00:25:38.859 =================================================================================================================== 00:25:38.859 Total : 143132.06 139.78 0.00 0.00 5351.86 1357.53 8426.06' 00:25:38.859 16:43:09 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-13 16:43:06.862300] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:38.859 [2024-07-13 16:43:06.862584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142895 ] 00:25:38.859 Using job config with 3 jobs 00:25:38.859 [2024-07-13 16:43:07.019403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.859 [2024-07-13 16:43:07.114302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.859 cpumask for '\''job0'\'' is too big 00:25:38.859 cpumask for '\''job1'\'' is too big 00:25:38.859 cpumask for '\''job2'\'' is too big 00:25:38.859 Running I/O for 2 seconds... 
00:25:38.859 00:25:38.859 Latency(us) 00:25:38.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.859 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:38.859 Malloc0 : 2.01 47740.91 46.62 0.00 0.00 5356.85 1443.35 8426.06 00:25:38.859 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:38.859 Malloc0 : 2.01 47711.11 46.59 0.00 0.00 5351.88 1443.35 7115.34 00:25:38.859 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:38.859 Malloc0 : 2.01 47680.04 46.56 0.00 0.00 5346.84 1357.53 6116.69 00:25:38.859 =================================================================================================================== 00:25:38.859 Total : 143132.06 139.78 0.00 0.00 5351.86 1357.53 8426.06' 00:25:38.859 16:43:09 -- bdevperf/common.sh@32 -- # echo '[2024-07-13 16:43:06.862300] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:38.859 [2024-07-13 16:43:06.862584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142895 ] 00:25:38.859 Using job config with 3 jobs 00:25:38.859 [2024-07-13 16:43:07.019403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.859 [2024-07-13 16:43:07.114302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.859 cpumask for '\''job0'\'' is too big 00:25:38.859 cpumask for '\''job1'\'' is too big 00:25:38.859 cpumask for '\''job2'\'' is too big 00:25:38.859 Running I/O for 2 seconds... 00:25:38.859 00:25:38.859 Latency(us) 00:25:38.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.859 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:38.859 Malloc0 : 2.01 47740.91 46.62 0.00 0.00 5356.85 1443.35 8426.06 00:25:38.859 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:38.859 Malloc0 : 2.01 47711.11 46.59 0.00 0.00 5351.88 1443.35 7115.34 00:25:38.859 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:25:38.859 Malloc0 : 2.01 47680.04 46.56 0.00 0.00 5346.84 1357.53 6116.69 00:25:38.859 =================================================================================================================== 00:25:38.859 Total : 143132.06 139.78 0.00 0.00 5351.86 1357.53 8426.06' 00:25:38.859 16:43:09 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:38.859 16:43:09 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:38.859 16:43:09 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:25:38.859 16:43:09 -- bdevperf/test_config.sh@35 -- # cleanup 00:25:38.859 16:43:09 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:38.859 16:43:09 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:25:38.859 16:43:09 -- bdevperf/common.sh@8 -- # local job_section=global 00:25:38.859 16:43:09 -- bdevperf/common.sh@9 -- # local rw=rw 00:25:38.859 16:43:09 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:25:38.859 16:43:09 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:25:38.859 16:43:09 -- bdevperf/common.sh@13 -- # cat 00:25:38.859 16:43:09 -- bdevperf/common.sh@18 -- # job='[global]' 00:25:38.859 16:43:09 -- bdevperf/common.sh@19 -- # echo 00:25:38.859 00:25:38.859 
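# For context on the create_job calls traced above and below
# (bdevperf/common.sh@8-20): each call appends one fio-style INI section to
# test.conf, which bdevperf later consumes via -j. A sketch of roughly what the
# test_config.sh@37-@41 sequence should leave in test.conf -- the key names are
# inferred from the traced variables (rw=rw, filename=Malloc0:Malloc1), and the
# [global] section may carry extra defaults from the cat at common.sh@13, so
# this is an assumption rather than a verbatim dump of the generated file:
#
#   [global]
#   filename=Malloc0:Malloc1
#   rw=rw
#   [job0]
#   [job1]
#   [job2]
#   [job3]
#
# job0-job3 define no keys of their own, so each inherits the global rw and
# filename; the subsequent bdevperf run accordingly reports "Using job config
# with 4 jobs".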
16:43:09 -- bdevperf/common.sh@20 -- # cat 00:25:38.859 16:43:09 -- bdevperf/test_config.sh@38 -- # create_job job0 00:25:38.859 16:43:09 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:38.859 16:43:09 -- bdevperf/common.sh@9 -- # local rw= 00:25:38.859 16:43:09 -- bdevperf/common.sh@10 -- # local filename= 00:25:38.859 16:43:09 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:38.859 16:43:09 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:38.859 16:43:09 -- bdevperf/common.sh@19 -- # echo 00:25:38.859 00:25:38.859 16:43:09 -- bdevperf/common.sh@20 -- # cat 00:25:38.859 16:43:09 -- bdevperf/test_config.sh@39 -- # create_job job1 00:25:38.859 16:43:09 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:38.859 16:43:09 -- bdevperf/common.sh@9 -- # local rw= 00:25:38.859 16:43:09 -- bdevperf/common.sh@10 -- # local filename= 00:25:38.859 16:43:09 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:38.859 16:43:09 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:38.859 16:43:09 -- bdevperf/common.sh@19 -- # echo 00:25:38.859 00:25:38.859 16:43:09 -- bdevperf/common.sh@20 -- # cat 00:25:38.859 16:43:09 -- bdevperf/test_config.sh@40 -- # create_job job2 00:25:38.859 16:43:09 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:38.859 16:43:09 -- bdevperf/common.sh@9 -- # local rw= 00:25:38.859 16:43:09 -- bdevperf/common.sh@10 -- # local filename= 00:25:38.859 16:43:09 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:38.859 16:43:09 -- bdevperf/common.sh@18 -- # job='[job2]' 00:25:38.859 16:43:09 -- bdevperf/common.sh@19 -- # echo 00:25:38.859 00:25:38.859 16:43:09 -- bdevperf/common.sh@20 -- # cat 00:25:38.859 16:43:09 -- bdevperf/test_config.sh@41 -- # create_job job3 00:25:38.859 16:43:09 -- bdevperf/common.sh@8 -- # local job_section=job3 00:25:38.859 16:43:09 -- bdevperf/common.sh@9 -- # local rw= 00:25:38.859 16:43:09 -- bdevperf/common.sh@10 -- # local filename= 00:25:38.859 16:43:09 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:25:38.859 16:43:09 -- bdevperf/common.sh@18 -- # job='[job3]' 00:25:38.859 16:43:09 -- bdevperf/common.sh@19 -- # echo 00:25:38.859 00:25:38.859 16:43:09 -- bdevperf/common.sh@20 -- # cat 00:25:38.859 16:43:09 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:41.404 16:43:12 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-13 16:43:09.915450] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:41.404 [2024-07-13 16:43:09.916300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142946 ] 00:25:41.404 Using job config with 4 jobs 00:25:41.404 [2024-07-13 16:43:10.072159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.404 [2024-07-13 16:43:10.155954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.404 cpumask for '\''job0'\'' is too big 00:25:41.404 cpumask for '\''job1'\'' is too big 00:25:41.404 cpumask for '\''job2'\'' is too big 00:25:41.404 cpumask for '\''job3'\'' is too big 00:25:41.404 Running I/O for 2 seconds... 
00:25:41.404 00:25:41.404 Latency(us) 00:25:41.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.404 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:41.404 Malloc0 : 2.02 17651.74 17.24 0.00 0.00 14492.99 2964.72 24466.77 00:25:41.404 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:41.404 Malloc1 : 2.03 17658.18 17.24 0.00 0.00 14477.77 3510.86 24341.94 00:25:41.404 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:41.404 Malloc0 : 2.03 17647.59 17.23 0.00 0.00 14447.85 2824.29 21346.01 00:25:41.404 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:41.404 Malloc1 : 2.03 17636.93 17.22 0.00 0.00 14445.87 3464.05 21221.18 00:25:41.404 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:41.404 Malloc0 : 2.03 17626.39 17.21 0.00 0.00 14414.31 2855.50 18350.08 00:25:41.404 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:41.404 Malloc1 : 2.03 17615.08 17.20 0.00 0.00 14414.70 3448.44 18100.42 00:25:41.404 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:41.404 Malloc0 : 2.04 17604.53 17.19 0.00 0.00 14385.48 2917.91 15478.98 00:25:41.404 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:25:41.404 Malloc1 : 2.04 17593.85 17.18 0.00 0.00 14385.02 3417.23 15478.98 00:25:41.404 =================================================================================================================== 00:25:41.404 Total : 141034.29 137.73 0.00 0.00 14432.94 2824.29 24466.77' 00:25:41.404 16:43:12 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[bdevperf output identical to the capture above - elided]' 00:25:41.404 16:43:12 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:41.404 16:43:12 -- bdevperf/common.sh@32 -- # echo '[bdevperf output identical to the capture above - elided]' 00:25:41.404 16:43:12 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:41.664 16:43:12 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:25:41.664 16:43:12 -- bdevperf/test_config.sh@44 -- # cleanup 00:25:41.664 16:43:12 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:41.664 16:43:12 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:41.664 00:25:41.664 real 0m12.252s 00:25:41.664 user 0m10.286s 00:25:41.664 sys 0m1.403s 00:25:41.664 ************************************ 00:25:41.664 END TEST bdevperf_config 00:25:41.664 ************************************ 00:25:41.664 16:43:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:41.664 16:43:12 -- common/autotest_common.sh@10 -- # set +x 00:25:41.664 16:43:12 -- spdk/autotest.sh@198 -- # uname -s 00:25:41.664 16:43:12 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:25:41.664 16:43:12 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:25:41.664 16:43:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:41.664 16:43:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:41.664 16:43:12 -- common/autotest_common.sh@10 -- # set +x 00:25:41.664 ************************************ 00:25:41.664 START TEST reactor_set_interrupt 00:25:41.664 ************************************ 00:25:41.664 16:43:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:25:41.664 * Looking for test storage...
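Before following the interrupt test, a note on the bdevperf_config assertions that just completed: both [[ 3 == \3 ]] and [[ 4 == \4 ]] rest on the pipeline traced at bdevperf/common.sh@32. Reassembled from those visible commands, the helper is essentially:

  # get_num_jobs, reconstructed from the traced pipeline: echo the captured
  # bdevperf output, isolate the job-config banner, then strip it to its number
  get_num_jobs() {
      echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
  }

For the 4-job run this yields 4, matching the four job sections in test.conf. The table is also internally consistent: the Total row equals the per-job IOPS sum, 17651.74 + 17658.18 + 17647.59 + 17636.93 + 17626.39 + 17615.08 + 17604.53 + 17593.85 = 141034.29.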
00:25:41.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:41.664 16:43:13 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:25:41.664 16:43:13 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:25:41.664 16:43:13 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:41.664 16:43:13 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:41.664 16:43:13 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:25:41.664 16:43:13 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:41.664 16:43:13 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:25:41.664 16:43:13 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:25:41.664 16:43:13 -- common/autotest_common.sh@34 -- # set -e 00:25:41.664 16:43:13 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:25:41.664 16:43:13 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:25:41.664 16:43:13 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:25:41.664 16:43:13 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:25:41.664 16:43:13 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:25:41.664 16:43:13 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:25:41.664 16:43:13 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:25:41.664 16:43:13 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:25:41.664 16:43:13 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:25:41.664 16:43:13 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:25:41.664 16:43:13 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:25:41.664 16:43:13 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:25:41.664 16:43:13 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:25:41.664 16:43:13 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:25:41.664 16:43:13 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:25:41.664 16:43:13 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:25:41.664 16:43:13 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:25:41.664 16:43:13 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:25:41.664 16:43:13 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:25:41.664 16:43:13 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:25:41.664 16:43:13 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:25:41.664 16:43:13 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:25:41.664 16:43:13 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:41.664 16:43:13 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:25:41.664 16:43:13 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:25:41.664 16:43:13 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:25:41.664 16:43:13 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:25:41.664 16:43:13 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:25:41.664 16:43:13 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:25:41.664 16:43:13 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:25:41.664 16:43:13 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 
00:25:41.664 16:43:13 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:25:41.664 16:43:13 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:25:41.664 16:43:13 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:25:41.664 16:43:13 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:25:41.664 16:43:13 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:25:41.664 16:43:13 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:25:41.664 16:43:13 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:25:41.664 16:43:13 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:25:41.664 16:43:13 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:25:41.664 16:43:13 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:25:41.664 16:43:13 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:25:41.664 16:43:13 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:25:41.664 16:43:13 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:25:41.664 16:43:13 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:25:41.665 16:43:13 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:25:41.665 16:43:13 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:25:41.665 16:43:13 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:25:41.665 16:43:13 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:25:41.665 16:43:13 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:25:41.665 16:43:13 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:25:41.665 16:43:13 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:25:41.665 16:43:13 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:25:41.665 16:43:13 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:25:41.665 16:43:13 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:25:41.665 16:43:13 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:25:41.665 16:43:13 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:25:41.665 16:43:13 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:25:41.665 16:43:13 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:25:41.665 16:43:13 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:25:41.665 16:43:13 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:25:41.665 16:43:13 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:25:41.665 16:43:13 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:25:41.665 16:43:13 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:25:41.665 16:43:13 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:41.665 16:43:13 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:25:41.665 16:43:13 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:25:41.665 16:43:13 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:25:41.665 16:43:13 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:25:41.665 16:43:13 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:25:41.665 16:43:13 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:25:41.665 16:43:13 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:25:41.665 16:43:13 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:25:41.665 16:43:13 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:25:41.665 16:43:13 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:25:41.665 16:43:13 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:25:41.665 16:43:13 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:25:41.665 16:43:13 -- 
common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:25:41.665 16:43:13 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:25:41.665 16:43:13 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:25:41.665 16:43:13 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:25:41.665 16:43:13 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:25:41.665 16:43:13 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:25:41.665 16:43:13 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:41.665 16:43:13 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:41.665 16:43:13 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:25:41.665 16:43:13 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:25:41.665 16:43:13 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:25:41.665 16:43:13 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:25:41.665 16:43:13 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:25:41.665 16:43:13 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:25:41.665 16:43:13 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:25:41.665 16:43:13 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:25:41.665 16:43:13 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:25:41.665 16:43:13 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:25:41.665 16:43:13 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:25:41.665 16:43:13 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:25:41.665 16:43:13 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:25:41.665 16:43:13 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:25:41.665 #define SPDK_CONFIG_H 00:25:41.665 #define SPDK_CONFIG_APPS 1 00:25:41.665 #define SPDK_CONFIG_ARCH native 00:25:41.665 #define SPDK_CONFIG_ASAN 1 00:25:41.665 #undef SPDK_CONFIG_AVAHI 00:25:41.665 #undef SPDK_CONFIG_CET 00:25:41.665 #define SPDK_CONFIG_COVERAGE 1 00:25:41.665 #define SPDK_CONFIG_CROSS_PREFIX 00:25:41.665 #undef SPDK_CONFIG_CRYPTO 00:25:41.665 #undef SPDK_CONFIG_CRYPTO_MLX5 00:25:41.665 #undef SPDK_CONFIG_CUSTOMOCF 00:25:41.665 #undef SPDK_CONFIG_DAOS 00:25:41.665 #define SPDK_CONFIG_DAOS_DIR 00:25:41.665 #define SPDK_CONFIG_DEBUG 1 00:25:41.665 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:25:41.665 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:25:41.665 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:25:41.665 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:25:41.665 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:25:41.665 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:41.665 #define SPDK_CONFIG_EXAMPLES 1 00:25:41.665 #undef SPDK_CONFIG_FC 00:25:41.665 #define SPDK_CONFIG_FC_PATH 00:25:41.665 #define SPDK_CONFIG_FIO_PLUGIN 1 00:25:41.665 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:25:41.665 #undef SPDK_CONFIG_FUSE 00:25:41.665 #undef SPDK_CONFIG_FUZZER 00:25:41.665 #define SPDK_CONFIG_FUZZER_LIB 00:25:41.665 #undef SPDK_CONFIG_GOLANG 00:25:41.665 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:25:41.665 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:25:41.665 #undef 
SPDK_CONFIG_HAVE_LIBARCHIVE 00:25:41.665 #undef SPDK_CONFIG_HAVE_LIBBSD 00:25:41.665 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:25:41.665 #define SPDK_CONFIG_IDXD 1 00:25:41.665 #undef SPDK_CONFIG_IDXD_KERNEL 00:25:41.665 #undef SPDK_CONFIG_IPSEC_MB 00:25:41.665 #define SPDK_CONFIG_IPSEC_MB_DIR 00:25:41.665 #define SPDK_CONFIG_ISAL 1 00:25:41.665 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:25:41.665 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:25:41.665 #define SPDK_CONFIG_LIBDIR 00:25:41.665 #undef SPDK_CONFIG_LTO 00:25:41.665 #define SPDK_CONFIG_MAX_LCORES 00:25:41.665 #define SPDK_CONFIG_NVME_CUSE 1 00:25:41.665 #undef SPDK_CONFIG_OCF 00:25:41.665 #define SPDK_CONFIG_OCF_PATH 00:25:41.665 #define SPDK_CONFIG_OPENSSL_PATH 00:25:41.665 #undef SPDK_CONFIG_PGO_CAPTURE 00:25:41.665 #undef SPDK_CONFIG_PGO_USE 00:25:41.665 #define SPDK_CONFIG_PREFIX /usr/local 00:25:41.665 #define SPDK_CONFIG_RAID5F 1 00:25:41.665 #undef SPDK_CONFIG_RBD 00:25:41.665 #define SPDK_CONFIG_RDMA 1 00:25:41.665 #define SPDK_CONFIG_RDMA_PROV verbs 00:25:41.665 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:25:41.665 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:25:41.665 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:25:41.665 #undef SPDK_CONFIG_SHARED 00:25:41.665 #undef SPDK_CONFIG_SMA 00:25:41.665 #define SPDK_CONFIG_TESTS 1 00:25:41.665 #undef SPDK_CONFIG_TSAN 00:25:41.665 #undef SPDK_CONFIG_UBLK 00:25:41.665 #define SPDK_CONFIG_UBSAN 1 00:25:41.665 #define SPDK_CONFIG_UNIT_TESTS 1 00:25:41.665 #undef SPDK_CONFIG_URING 00:25:41.665 #define SPDK_CONFIG_URING_PATH 00:25:41.665 #undef SPDK_CONFIG_URING_ZNS 00:25:41.665 #undef SPDK_CONFIG_USDT 00:25:41.665 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:25:41.665 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:25:41.665 #undef SPDK_CONFIG_VFIO_USER 00:25:41.665 #define SPDK_CONFIG_VFIO_USER_DIR 00:25:41.665 #define SPDK_CONFIG_VHOST 1 00:25:41.665 #define SPDK_CONFIG_VIRTIO 1 00:25:41.665 #undef SPDK_CONFIG_VTUNE 00:25:41.665 #define SPDK_CONFIG_VTUNE_DIR 00:25:41.665 #define SPDK_CONFIG_WERROR 1 00:25:41.665 #define SPDK_CONFIG_WPDK_DIR 00:25:41.665 #undef SPDK_CONFIG_XNVME 00:25:41.665 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:25:41.665 16:43:13 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:25:41.665 16:43:13 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:41.665 16:43:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.665 16:43:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.665 16:43:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.665 16:43:13 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:41.665 16:43:13 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:41.665 16:43:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:41.665 16:43:13 -- paths/export.sh@5 -- # export PATH 00:25:41.665 16:43:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:41.665 16:43:13 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:41.665 16:43:13 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:41.665 16:43:13 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:41.665 16:43:13 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:41.926 16:43:13 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:25:41.926 16:43:13 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:25:41.926 16:43:13 -- pm/common@16 -- # TEST_TAG=N/A 00:25:41.926 16:43:13 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:25:41.926 16:43:13 -- common/autotest_common.sh@52 -- # : 1 00:25:41.926 16:43:13 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:25:41.926 16:43:13 -- common/autotest_common.sh@56 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:25:41.926 16:43:13 -- common/autotest_common.sh@58 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:25:41.926 16:43:13 -- common/autotest_common.sh@60 -- # : 1 00:25:41.926 16:43:13 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:25:41.926 16:43:13 -- common/autotest_common.sh@62 -- # : 1 00:25:41.926 16:43:13 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:25:41.926 16:43:13 -- common/autotest_common.sh@64 -- # : 00:25:41.926 16:43:13 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:25:41.926 16:43:13 -- common/autotest_common.sh@66 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:25:41.926 16:43:13 -- common/autotest_common.sh@68 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:25:41.926 16:43:13 -- common/autotest_common.sh@70 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:25:41.926 16:43:13 -- common/autotest_common.sh@72 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:25:41.926 16:43:13 -- common/autotest_common.sh@74 -- # : 1 00:25:41.926 16:43:13 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:25:41.926 16:43:13 -- common/autotest_common.sh@76 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:25:41.926 16:43:13 -- common/autotest_common.sh@78 -- # : 0 00:25:41.926 16:43:13 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:25:41.926 16:43:13 -- common/autotest_common.sh@80 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:25:41.926 16:43:13 -- common/autotest_common.sh@82 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:25:41.926 16:43:13 -- common/autotest_common.sh@84 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:25:41.926 16:43:13 -- common/autotest_common.sh@86 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:25:41.926 16:43:13 -- common/autotest_common.sh@88 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:25:41.926 16:43:13 -- common/autotest_common.sh@90 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:25:41.926 16:43:13 -- common/autotest_common.sh@92 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:25:41.926 16:43:13 -- common/autotest_common.sh@94 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:25:41.926 16:43:13 -- common/autotest_common.sh@96 -- # : rdma 00:25:41.926 16:43:13 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:25:41.926 16:43:13 -- common/autotest_common.sh@98 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:25:41.926 16:43:13 -- common/autotest_common.sh@100 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:25:41.926 16:43:13 -- common/autotest_common.sh@102 -- # : 1 00:25:41.926 16:43:13 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:25:41.926 16:43:13 -- common/autotest_common.sh@104 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:25:41.926 16:43:13 -- common/autotest_common.sh@106 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:25:41.926 16:43:13 -- common/autotest_common.sh@108 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:25:41.926 16:43:13 -- common/autotest_common.sh@110 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:25:41.926 16:43:13 -- common/autotest_common.sh@112 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:25:41.926 16:43:13 -- common/autotest_common.sh@114 -- # : 1 00:25:41.926 16:43:13 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:25:41.926 16:43:13 -- common/autotest_common.sh@116 -- # : 1 00:25:41.926 16:43:13 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:25:41.926 16:43:13 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:25:41.926 16:43:13 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:25:41.926 16:43:13 -- common/autotest_common.sh@120 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:25:41.926 16:43:13 -- common/autotest_common.sh@122 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:25:41.926 16:43:13 -- common/autotest_common.sh@124 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:25:41.926 16:43:13 -- 
common/autotest_common.sh@126 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:25:41.926 16:43:13 -- common/autotest_common.sh@128 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:25:41.926 16:43:13 -- common/autotest_common.sh@130 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:25:41.926 16:43:13 -- common/autotest_common.sh@132 -- # : v22.11.4 00:25:41.926 16:43:13 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:25:41.926 16:43:13 -- common/autotest_common.sh@134 -- # : true 00:25:41.926 16:43:13 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:25:41.926 16:43:13 -- common/autotest_common.sh@136 -- # : 1 00:25:41.926 16:43:13 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:25:41.926 16:43:13 -- common/autotest_common.sh@138 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:25:41.926 16:43:13 -- common/autotest_common.sh@140 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:25:41.926 16:43:13 -- common/autotest_common.sh@142 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:25:41.926 16:43:13 -- common/autotest_common.sh@144 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:25:41.926 16:43:13 -- common/autotest_common.sh@146 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:25:41.926 16:43:13 -- common/autotest_common.sh@148 -- # : 00:25:41.926 16:43:13 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:25:41.926 16:43:13 -- common/autotest_common.sh@150 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:25:41.926 16:43:13 -- common/autotest_common.sh@152 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:25:41.926 16:43:13 -- common/autotest_common.sh@154 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:25:41.926 16:43:13 -- common/autotest_common.sh@156 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:25:41.926 16:43:13 -- common/autotest_common.sh@158 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:25:41.926 16:43:13 -- common/autotest_common.sh@160 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:25:41.926 16:43:13 -- common/autotest_common.sh@163 -- # : 00:25:41.926 16:43:13 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:25:41.926 16:43:13 -- common/autotest_common.sh@165 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:25:41.926 16:43:13 -- common/autotest_common.sh@167 -- # : 0 00:25:41.926 16:43:13 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:25:41.926 16:43:13 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:41.926 16:43:13 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:41.926 16:43:13 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:41.926 16:43:13 -- common/autotest_common.sh@172 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:41.926 16:43:13 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:41.926 16:43:13 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:41.926 16:43:13 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:41.926 16:43:13 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:41.926 16:43:13 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:25:41.926 16:43:13 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:25:41.926 16:43:13 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:41.926 16:43:13 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:41.926 16:43:13 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:25:41.926 16:43:13 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:25:41.926 16:43:13 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:41.926 16:43:13 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:41.926 16:43:13 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:41.926 16:43:13 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:41.926 16:43:13 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:25:41.926 16:43:13 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:25:41.926 16:43:13 -- common/autotest_common.sh@196 -- # cat 00:25:41.926 16:43:13 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:25:41.926 16:43:13 -- 
common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:41.926 16:43:13 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:41.926 16:43:13 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:41.926 16:43:13 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:41.926 16:43:13 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:25:41.926 16:43:13 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:25:41.926 16:43:13 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:41.926 16:43:13 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:41.926 16:43:13 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:41.926 16:43:13 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:41.926 16:43:13 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:25:41.926 16:43:13 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:25:41.926 16:43:13 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:41.926 16:43:13 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:41.926 16:43:13 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:41.926 16:43:13 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:41.926 16:43:13 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:41.926 16:43:13 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:41.926 16:43:13 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:25:41.926 16:43:13 -- common/autotest_common.sh@249 -- # export valgrind= 00:25:41.926 16:43:13 -- common/autotest_common.sh@249 -- # valgrind= 00:25:41.926 16:43:13 -- common/autotest_common.sh@255 -- # uname -s 00:25:41.926 16:43:13 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:25:41.926 16:43:13 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:25:41.926 16:43:13 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:25:41.926 16:43:13 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:25:41.926 16:43:13 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:41.926 16:43:13 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:41.926 16:43:13 -- common/autotest_common.sh@265 -- # MAKE=make 00:25:41.926 16:43:13 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:25:41.926 16:43:13 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:25:41.926 16:43:13 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:25:41.926 16:43:13 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:25:41.926 16:43:13 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:25:41.926 16:43:13 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:25:41.926 16:43:13 -- common/autotest_common.sh@309 -- # [[ -z 143020 ]] 00:25:41.926 16:43:13 -- common/autotest_common.sh@309 -- # kill -0 143020 00:25:41.926 16:43:13 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:25:41.926 16:43:13 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:25:41.926 16:43:13 -- 
common/autotest_common.sh@321 -- # local requested_size=2147483648 00:25:41.926 16:43:13 -- common/autotest_common.sh@322 -- # local mount target_dir 00:25:41.926 16:43:13 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:25:41.926 16:43:13 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:25:41.926 16:43:13 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:25:41.926 16:43:13 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:25:41.927 16:43:13 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.33Y1IJ 00:25:41.927 16:43:13 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:25:41.927 16:43:13 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:25:41.927 16:43:13 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:25:41.927 16:43:13 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.33Y1IJ/tests/interrupt /tmp/spdk.33Y1IJ 00:25:41.927 16:43:13 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:25:41.927 16:43:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:41.927 16:43:13 -- common/autotest_common.sh@318 -- # df -T 00:25:41.927 16:43:13 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:25:41.927 16:43:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:25:41.927 16:43:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=9439977472 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:25:41.927 16:43:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=11160039424 00:25:41.927 16:43:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267146240 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:25:41.927 16:43:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:25:41.927 16:43:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:25:41.927 16:43:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:25:41.927 16:43:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:41.927 16:43:13 -- 
common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:25:41.927 16:43:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:25:41.927 16:43:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:25:41.927 16:43:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:25:41.927 16:43:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:25:41.927 16:43:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=97105465344 00:25:41.927 16:43:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:25:41.927 16:43:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=2597314560 00:25:41.927 16:43:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:41.927 16:43:13 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:25:41.927 * Looking for test storage... 00:25:41.927 16:43:13 -- common/autotest_common.sh@359 -- # local target_space new_size 00:25:41.927 16:43:13 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:25:41.927 16:43:13 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:25:41.927 16:43:13 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:41.927 16:43:13 -- common/autotest_common.sh@363 -- # mount=/ 00:25:41.927 16:43:13 -- common/autotest_common.sh@365 -- # target_space=9439977472 00:25:41.927 16:43:13 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:25:41.927 16:43:13 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:25:41.927 16:43:13 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:25:41.927 16:43:13 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:25:41.927 16:43:13 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:25:41.927 16:43:13 -- common/autotest_common.sh@372 -- # new_size=13374631936 00:25:41.927 16:43:13 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:25:41.927 16:43:13 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:41.927 16:43:13 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:41.927 16:43:13 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:41.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:41.927 16:43:13 -- common/autotest_common.sh@380 -- # return 0 00:25:41.927 16:43:13 -- common/autotest_common.sh@1667 -- # set 
-o errtrace 00:25:41.927 16:43:13 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:25:41.927 16:43:13 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:25:41.927 16:43:13 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:25:41.927 16:43:13 -- common/autotest_common.sh@1672 -- # true 00:25:41.927 16:43:13 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:25:41.927 16:43:13 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:25:41.927 16:43:13 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:25:41.927 16:43:13 -- common/autotest_common.sh@27 -- # exec 00:25:41.927 16:43:13 -- common/autotest_common.sh@29 -- # exec 00:25:41.927 16:43:13 -- common/autotest_common.sh@31 -- # xtrace_restore 00:25:41.927 16:43:13 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:25:41.927 16:43:13 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:25:41.927 16:43:13 -- common/autotest_common.sh@18 -- # set -x 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:25:41.927 16:43:13 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:41.927 16:43:13 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:41.927 16:43:13 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143068 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 143068 /var/tmp/spdk.sock 00:25:41.927 16:43:13 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:25:41.927 16:43:13 -- common/autotest_common.sh@819 -- # '[' -z 143068 ']' 00:25:41.927 16:43:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.927 16:43:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:41.927 16:43:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
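The set_test_storage walk traced above decides where this test may write scratch data: df -T output is parsed into per-mount arrays, the mount backing the test directory (here / on /dev/vda1) must have at least the requested space available, and the candidate is accepted only if filling it would not push the filesystem past 95% utilization. The traced numbers line up: requested_size grows from 2147483648 to 2214592512 (a 64 MiB cushion), target_space is the 9439977472 bytes available on /, and new_size = used + requested = 11160039424 + 2214592512 = 13374631936, well under 95% of the 20616794112-byte filesystem, so SPDK_TEST_STORAGE lands in test/interrupt. A condensed sketch of that logic, with variable names taken from the trace and the multi-candidate fallback loop simplified away:

  # Condensed sketch of set_test_storage; simplified, not the full helper
  set_test_storage() {
      local requested_size=$(($1 + 64 * 1024 * 1024))   # observed cushion: 2147483648 -> 2214592512
      declare -A sizes avails uses
      local source fs size use avail _ mount
      while read -r source fs size use avail _ mount; do
          sizes[$mount]=$size
          avails[$mount]=$avail
          uses[$mount]=$use
      done < <(df -T | grep -v Filesystem)
      local target_space=${avails[/]}                   # 9439977472 in this run
      ((target_space >= requested_size)) || return 1
      local new_size=$((${uses[/]} + requested_size))   # 11160039424 + 2214592512 = 13374631936
      ((new_size * 100 / sizes[/] > 95)) && return 1    # refuse to crowd the filesystem
      export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
  }

Note that df reports 1 KiB blocks by default, so the real helper presumably normalizes units; the values echoed in the trace are already bytes. With storage settled, the harness launches interrupt_tgt pinned to cores 0-2 (-m 0x07) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers, which is the wait message immediately above.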
00:25:41.927 16:43:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:41.927 16:43:13 -- common/autotest_common.sh@10 -- # set +x 00:25:41.927 [2024-07-13 16:43:13.292316] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:41.927 [2024-07-13 16:43:13.292655] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143068 ] 00:25:42.186 [2024-07-13 16:43:13.459748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:42.186 [2024-07-13 16:43:13.535707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.186 [2024-07-13 16:43:13.535861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.186 [2024-07-13 16:43:13.535863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.186 [2024-07-13 16:43:13.649151] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:43.124 16:43:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:43.124 16:43:14 -- common/autotest_common.sh@852 -- # return 0 00:25:43.124 16:43:14 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:25:43.124 16:43:14 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.124 Malloc0 00:25:43.124 Malloc1 00:25:43.124 Malloc2 00:25:43.124 16:43:14 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:25:43.124 16:43:14 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:43.124 16:43:14 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:43.124 16:43:14 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:25:43.124 5000+0 records in 00:25:43.124 5000+0 records out 00:25:43.124 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0375457 s, 273 MB/s 00:25:43.124 16:43:14 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:25:43.384 AIO0 00:25:43.384 16:43:14 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 143068 00:25:43.384 16:43:14 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 143068 without_thd 00:25:43.384 16:43:14 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=143068 00:25:43.384 16:43:14 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:25:43.384 16:43:14 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:25:43.384 16:43:14 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:25:43.384 16:43:14 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:25:43.384 16:43:14 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:43.384 16:43:14 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:25:43.384 16:43:14 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:43.384 16:43:14 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:43.384 16:43:14 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:43.643 
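reactor_get_thread_ids, traced above, maps a reactor's cpumask to the SPDK threads pinned on it: the 0x prefix is dropped (0x1 becomes 1) and passed into jq as $reactor_cpumask, which is compared against the cpumask field each entry of thread_get_stats carries. The JSON below is illustrative only, not captured from this run, but it shows how the exact filter from the trace behaves; the real query's result, thread id 1, is echoed on the next line, and the matching cpumask-4 query a little further on comes back empty because no spdk_thread is pinned to core 2 at this point.

  # Illustrative only - a hypothetical thread_get_stats payload run through
  # the exact jq filter from the trace
  echo '{"threads":[{"id":1,"name":"app_thread","cpumask":"1"}]}' |
      jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
  # prints: 1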
16:43:15 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:25:43.643 16:43:15 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:25:43.643 16:43:15 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:25:43.643 16:43:15 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:25:43.643 16:43:15 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:43.643 16:43:15 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:25:43.643 16:43:15 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:43.643 16:43:15 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:43.643 16:43:15 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:25:43.902 16:43:15 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:25:43.902 16:43:15 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:25:43.902 spdk_thread ids are 1 on reactor0. 00:25:43.902 16:43:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:43.902 16:43:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143068 0 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143068 0 idle 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@33 -- # local pid=143068 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143068 -w 256 00:25:43.902 16:43:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143068 root 20 0 20.1t 57880 25816 S 6.7 0.5 0:00.38 reactor_0' 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@48 -- # echo 143068 root 20 0 20.1t 57880 25816 S 6.7 0.5 0:00.38 reactor_0 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:44.161 16:43:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:44.161 16:43:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143068 1 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143068 1 idle 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@33 -- # local 
pid=143068 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143068 -w 256 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143072 root 20 0 20.1t 57880 25816 S 0.0 0.5 0:00.00 reactor_1' 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@48 -- # echo 143072 root 20 0 20.1t 57880 25816 S 0.0 0.5 0:00.00 reactor_1 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:44.161 16:43:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:44.162 16:43:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:44.162 16:43:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143068 2 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143068 2 idle 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@33 -- # local pid=143068 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:44.162 16:43:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143068 -w 256 00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143073 root 20 0 20.1t 57880 25816 S 0.0 0.5 0:00.00 reactor_2' 00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@48 -- # echo 143073 root 20 0 20.1t 57880 25816 S 0.0 0.5 0:00.00 reactor_2 00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 
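For reference, every busy and idle check in this trace funnels through one probe. A minimal sketch of that helper, reconstructed from the xtrace above (the column-9 parsing, the fractional-part truncation, and the 70/30 thresholds match the trace; the function layout and the omitted retry loop around top are assumptions):

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3    # state is "busy" or "idle"
        local top_reactor cpu_rate
        # One batch iteration of per-thread (-H) stats for the target PID.
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
        # %CPU is column 9 of top's thread line; strip leading blanks first.
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}         # 6.7 -> 6, 99.9 -> 99, as in the trace
        [[ $state == busy && $cpu_rate -lt 70 ]] && return 1   # busy means >= 70%
        [[ $state == idle && $cpu_rate -gt 30 ]] && return 1   # idle means <= 30%
        return 0
    }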
00:25:44.421 16:43:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:44.421 16:43:15 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:25:44.421 16:43:15 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:25:44.421 16:43:15 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:25:44.681 [2024-07-13 16:43:16.038065] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:44.681 16:43:16 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:25:44.940 [2024-07-13 16:43:16.297842] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:25:44.940 [2024-07-13 16:43:16.298990] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:44.940 16:43:16 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:25:45.198 [2024-07-13 16:43:16.537516] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:25:45.198 [2024-07-13 16:43:16.538323] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:45.198 16:43:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:45.198 16:43:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143068 0 00:25:45.198 16:43:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143068 0 busy 00:25:45.198 16:43:16 -- interrupt/interrupt_common.sh@33 -- # local pid=143068 00:25:45.198 16:43:16 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:45.198 16:43:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:45.198 16:43:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:45.198 16:43:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:45.198 16:43:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:45.198 16:43:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:45.199 16:43:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143068 -w 256 00:25:45.199 16:43:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143068 root 20 0 20.1t 58028 25816 R 99.9 0.5 0:00.81 reactor_0' 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@48 -- # echo 143068 root 20 0 20.1t 58028 25816 R 99.9 0.5 0:00.81 reactor_0 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:45.457 16:43:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:45.457 16:43:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143068 2 
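All of the mode switches in this test go through a single RPC, invoked with the same flags as in the trace (the socket defaults to /var/tmp/spdk.sock):

    # Switch reactor 2 out of interrupt mode (into poll mode):
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
    # Switch it back into interrupt mode (no -d):
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2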
00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143068 2 busy 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@33 -- # local pid=143068 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143068 -w 256 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143073 root 20 0 20.1t 58028 25816 R 99.9 0.5 0:00.35 reactor_2' 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@48 -- # echo 143073 root 20 0 20.1t 58028 25816 R 99.9 0.5 0:00.35 reactor_2 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:45.457 16:43:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:45.458 16:43:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:45.458 16:43:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:45.458 16:43:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:45.458 16:43:16 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:25:45.716 [2024-07-13 16:43:17.141495] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
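The reactor_is_busy and reactor_is_idle calls above appear to be thin wrappers over the same probe, judging by the @70/@74 call sites in the xtrace (the exact wrapper form is an assumption):

    reactor_is_busy() { reactor_is_busy_or_idle "$1" "$2" busy; }
    reactor_is_idle() { reactor_is_busy_or_idle "$1" "$2" idle; }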
00:25:45.716 [2024-07-13 16:43:17.142248] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:45.716 16:43:17 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:25:45.716 16:43:17 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 143068 2 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143068 2 idle 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@33 -- # local pid=143068 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143068 -w 256 00:25:45.716 16:43:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143073 root 20 0 20.1t 58076 25816 S 0.0 0.5 0:00.60 reactor_2' 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@48 -- # echo 143073 root 20 0 20.1t 58076 25816 S 0.0 0.5 0:00.60 reactor_2 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:45.975 16:43:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:45.975 16:43:17 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:25:46.234 [2024-07-13 16:43:17.485527] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:25:46.234 [2024-07-13 16:43:17.486280] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:46.234 16:43:17 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:25:46.234 16:43:17 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:25:46.234 16:43:17 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:25:46.493 [2024-07-13 16:43:17.737982] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
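In this without_thd variant the app thread (id 1) is parked on core 1 before reactor 0 changes mode and moved back once interrupt mode is restored; both calls are verbatim from the trace:

    scripts/rpc.py thread_set_cpumask -i 1 -m 0x2   # park app_thread on core 1
    # ... reactor 0 is toggled out of and back into interrupt mode ...
    scripts/rpc.py thread_set_cpumask -i 1 -m 0x1   # restore it to core 0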
00:25:46.493 16:43:17 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 143068 0 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143068 0 idle 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@33 -- # local pid=143068 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143068 -w 256 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143068 root 20 0 20.1t 58176 25816 S 0.0 0.5 0:01.58 reactor_0' 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@48 -- # echo 143068 root 20 0 20.1t 58176 25816 S 0.0 0.5 0:01.58 reactor_0 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:46.493 16:43:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:46.493 16:43:17 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:25:46.493 16:43:17 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:25:46.493 16:43:17 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:25:46.493 16:43:17 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 143068 00:25:46.493 16:43:17 -- common/autotest_common.sh@926 -- # '[' -z 143068 ']' 00:25:46.493 16:43:17 -- common/autotest_common.sh@930 -- # kill -0 143068 00:25:46.493 16:43:17 -- common/autotest_common.sh@931 -- # uname 00:25:46.493 16:43:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:46.493 16:43:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143068 00:25:46.752 16:43:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:46.752 16:43:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:46.752 16:43:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143068' 00:25:46.752 killing process with pid 143068 00:25:46.752 16:43:17 -- common/autotest_common.sh@945 -- # kill 143068 00:25:46.752 16:43:17 -- common/autotest_common.sh@950 -- # wait 143068 00:25:47.011 16:43:18 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:25:47.011 16:43:18 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:47.011 16:43:18 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:25:47.011 16:43:18 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.011 16:43:18 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 
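The teardown above runs the usual autotest killprocess guards. A condensed sketch matching the checks visible in the xtrace (the real helper has more branches, e.g. for non-Linux hosts and sudo-wrapped processes):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1           # the '[' -z ... ']' guard
        kill -0 "$pid" || return 0          # already exited: nothing to do
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1   # sudo path handled elsewhere
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }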
00:25:47.011 16:43:18 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143208 00:25:47.011 16:43:18 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:25:47.011 16:43:18 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:47.011 16:43:18 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 143208 /var/tmp/spdk.sock 00:25:47.011 16:43:18 -- common/autotest_common.sh@819 -- # '[' -z 143208 ']' 00:25:47.011 16:43:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.011 16:43:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:47.011 16:43:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.011 16:43:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:47.011 16:43:18 -- common/autotest_common.sh@10 -- # set +x 00:25:47.011 [2024-07-13 16:43:18.476650] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:47.011 [2024-07-13 16:43:18.477042] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143208 ] 00:25:47.271 [2024-07-13 16:43:18.630263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:47.271 [2024-07-13 16:43:18.705571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.271 [2024-07-13 16:43:18.705732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.271 [2024-07-13 16:43:18.705731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.530 [2024-07-13 16:43:18.819435] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
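start_intr_tgt, whose trace spans the restart above, amounts to launching the interrupt_tgt example and waiting on its RPC socket ($rootdir stands in for /home/vagrant/spdk_repo/spdk; the flags are verbatim from the trace):

    start_intr_tgt() {
        local rpc_addr=/var/tmp/spdk.sock
        local cpu_mask=0x07
        "$rootdir/build/examples/interrupt_tgt" -m "$cpu_mask" -r "$rpc_addr" -E -g &
        intr_tgt_pid=$!
        trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
        waitforlisten "$intr_tgt_pid" "$rpc_addr"
    }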
00:25:48.097 16:43:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:48.097 16:43:19 -- common/autotest_common.sh@852 -- # return 0 00:25:48.097 16:43:19 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:25:48.097 16:43:19 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:48.355 Malloc0 00:25:48.355 Malloc1 00:25:48.355 Malloc2 00:25:48.355 16:43:19 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:25:48.355 16:43:19 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:48.355 16:43:19 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:48.355 16:43:19 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:25:48.355 5000+0 records in 00:25:48.355 5000+0 records out 00:25:48.355 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0286532 s, 357 MB/s 00:25:48.355 16:43:19 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:25:48.614 AIO0 00:25:48.614 16:43:20 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 143208 00:25:48.614 16:43:20 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 143208 00:25:48.614 16:43:20 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=143208 00:25:48.614 16:43:20 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:25:48.614 16:43:20 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:25:48.614 16:43:20 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:25:48.614 16:43:20 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:25:48.614 16:43:20 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:48.614 16:43:20 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:25:48.614 16:43:20 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:48.614 16:43:20 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:48.614 16:43:20 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:48.874 16:43:20 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:25:48.874 16:43:20 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:25:48.874 16:43:20 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:25:48.874 16:43:20 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:25:48.874 16:43:20 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:25:48.874 16:43:20 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:25:48.874 16:43:20 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:48.874 16:43:20 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:25:48.874 16:43:20 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:25:49.133 16:43:20 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:25:49.133 16:43:20 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on 
reactor0.' 00:25:49.133 spdk_thread ids are 1 on reactor0. 00:25:49.133 16:43:20 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:49.133 16:43:20 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143208 0 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143208 0 idle 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@33 -- # local pid=143208 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143208 -w 256 00:25:49.133 16:43:20 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143208 root 20 0 20.1t 58064 26032 S 0.0 0.5 0:00.36 reactor_0' 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@48 -- # echo 143208 root 20 0 20.1t 58064 26032 S 0.0 0.5 0:00.36 reactor_0 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:49.392 16:43:20 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:49.392 16:43:20 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143208 1 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143208 1 idle 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@33 -- # local pid=143208 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143208 -w 256 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143217 root 20 0 20.1t 58064 26032 S 0.0 0.5 0:00.00 reactor_1' 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@48 -- # echo 143217 root 20 0 20.1t 58064 26032 S 0.0 0.5 0:00.00 reactor_1 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 
00:25:49.392 16:43:20 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:49.393 16:43:20 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:25:49.393 16:43:20 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143208 2 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143208 2 idle 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@33 -- # local pid=143208 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143208 -w 256 00:25:49.393 16:43:20 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:49.652 16:43:20 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143219 root 20 0 20.1t 58064 26032 S 0.0 0.5 0:00.00 reactor_2' 00:25:49.652 16:43:20 -- interrupt/interrupt_common.sh@48 -- # echo 143219 root 20 0 20.1t 58064 26032 S 0.0 0.5 0:00.00 reactor_2 00:25:49.652 16:43:20 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:49.652 16:43:20 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:49.652 16:43:20 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:49.652 16:43:20 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:49.652 16:43:20 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:49.652 16:43:20 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:49.652 16:43:21 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:49.652 16:43:21 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:49.652 16:43:21 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:25:49.652 16:43:21 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:25:49.910 [2024-07-13 16:43:21.251851] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:25:49.910 [2024-07-13 16:43:21.252596] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:25:49.910 [2024-07-13 16:43:21.253050] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:49.910 16:43:21 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:25:50.169 [2024-07-13 16:43:21.496582] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
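The spdk_thread-id lookups near the start of this run (interrupt_common.sh@78-@85) reduce to filtering thread_get_stats output by cpumask. The jq program is copied from the trace; the wrapper around it, including the 0x-prefix stripping, is a sketch:

    reactor_get_thread_ids() {
        local reactor_cpumask=$1                # e.g. 0x1 for reactor 0
        local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
        reactor_cpumask=$((reactor_cpumask))    # 0x4 -> 4, matching the JSON field
        scripts/rpc.py thread_get_stats |
            jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
    }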
00:25:50.169 [2024-07-13 16:43:21.497328] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:50.169 16:43:21 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:50.169 16:43:21 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143208 0 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143208 0 busy 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@33 -- # local pid=143208 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143208 -w 256 00:25:50.169 16:43:21 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143208 root 20 0 20.1t 58164 26032 R 99.9 0.5 0:00.79 reactor_0' 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@48 -- # echo 143208 root 20 0 20.1t 58164 26032 R 99.9 0.5 0:00.79 reactor_0 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:50.428 16:43:21 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:25:50.428 16:43:21 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143208 2 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143208 2 busy 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@33 -- # local pid=143208 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143208 -w 256 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143219 root 20 0 20.1t 58164 26032 R 99.9 0.5 0:00.36 reactor_2' 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@48 -- # echo 143219 root 20 0 20.1t 58164 26032 R 99.9 0.5 0:00.36 reactor_2 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:25:50.428 16:43:21 
-- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:25:50.428 16:43:21 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:50.428 16:43:21 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:25:50.688 [2024-07-13 16:43:22.116724] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:25:50.688 [2024-07-13 16:43:22.117206] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:50.688 16:43:22 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:25:50.688 16:43:22 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 143208 2 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143208 2 idle 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@33 -- # local pid=143208 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143208 -w 256 00:25:50.688 16:43:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143219 root 20 0 20.1t 58224 26032 S 0.0 0.5 0:00.62 reactor_2' 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@48 -- # echo 143219 root 20 0 20.1t 58224 26032 S 0.0 0.5 0:00.62 reactor_2 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:50.947 16:43:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:50.947 16:43:22 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:25:51.206 [2024-07-13 16:43:22.548738] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:25:51.206 [2024-07-13 16:43:22.549453] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
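Both runs open with the same bdev scaffolding: three Malloc bdevs driven through rpc.py (the exact create commands are not visible in the trace) plus a 10 MB file exported as an AIO bdev with 2048-byte blocks. The two commands below are verbatim from the trace, with $testdir standing in for spdk/test/interrupt:

    dd if=/dev/zero of="$testdir/aiofile" bs=2048 count=5000
    scripts/rpc.py bdev_aio_create "$testdir/aiofile" AIO0 2048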
00:25:51.206 [2024-07-13 16:43:22.549607] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:25:51.206 16:43:22 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:25:51.206 16:43:22 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 143208 0 00:25:51.206 16:43:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143208 0 idle 00:25:51.206 16:43:22 -- interrupt/interrupt_common.sh@33 -- # local pid=143208 00:25:51.206 16:43:22 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:25:51.206 16:43:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:25:51.206 16:43:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:25:51.206 16:43:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:25:51.206 16:43:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:25:51.206 16:43:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:25:51.206 16:43:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:25:51.207 16:43:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143208 -w 256 00:25:51.207 16:43:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143208 root 20 0 20.1t 58276 26032 S 0.0 0.5 0:01.66 reactor_0' 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@48 -- # echo 143208 root 20 0 20.1t 58276 26032 S 0.0 0.5 0:01.66 reactor_0 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:25:51.466 16:43:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:25:51.466 16:43:22 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:25:51.466 16:43:22 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:25:51.466 16:43:22 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:51.466 16:43:22 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 143208 00:25:51.466 16:43:22 -- common/autotest_common.sh@926 -- # '[' -z 143208 ']' 00:25:51.466 16:43:22 -- common/autotest_common.sh@930 -- # kill -0 143208 00:25:51.466 16:43:22 -- common/autotest_common.sh@931 -- # uname 00:25:51.466 16:43:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:51.466 16:43:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143208 00:25:51.466 16:43:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:51.466 16:43:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:51.466 16:43:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143208' 00:25:51.466 killing process with pid 143208 00:25:51.466 16:43:22 -- common/autotest_common.sh@945 -- # kill 143208 00:25:51.466 16:43:22 -- common/autotest_common.sh@950 -- # wait 143208 00:25:52.033 16:43:23 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:25:52.033 16:43:23 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:52.033 ************************************ 
00:25:52.033 END TEST reactor_set_interrupt 00:25:52.033 ************************************ 00:25:52.033 00:25:52.033 real 0m10.282s 00:25:52.033 user 0m9.708s 00:25:52.033 sys 0m2.062s 00:25:52.033 16:43:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.033 16:43:23 -- common/autotest_common.sh@10 -- # set +x 00:25:52.033 16:43:23 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:25:52.033 16:43:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:52.033 16:43:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:52.033 16:43:23 -- common/autotest_common.sh@10 -- # set +x 00:25:52.033 ************************************ 00:25:52.033 START TEST reap_unregistered_poller 00:25:52.033 ************************************ 00:25:52.033 16:43:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:25:52.033 * Looking for test storage... 00:25:52.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:52.033 16:43:23 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:25:52.033 16:43:23 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:25:52.033 16:43:23 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:52.033 16:43:23 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:52.033 16:43:23 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:25:52.033 16:43:23 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:52.033 16:43:23 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:25:52.033 16:43:23 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:25:52.033 16:43:23 -- common/autotest_common.sh@34 -- # set -e 00:25:52.033 16:43:23 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:25:52.033 16:43:23 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:25:52.033 16:43:23 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:25:52.033 16:43:23 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:25:52.033 16:43:23 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:25:52.033 16:43:23 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:25:52.033 16:43:23 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:25:52.033 16:43:23 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:25:52.033 16:43:23 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:25:52.033 16:43:23 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:25:52.033 16:43:23 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:25:52.033 16:43:23 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:25:52.033 16:43:23 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:25:52.033 16:43:23 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:25:52.033 16:43:23 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:25:52.033 16:43:23 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:25:52.033 16:43:23 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:25:52.033 16:43:23 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:25:52.033 
16:43:23 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:25:52.033 16:43:23 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:25:52.033 16:43:23 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:25:52.033 16:43:23 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:25:52.033 16:43:23 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:52.033 16:43:23 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:25:52.033 16:43:23 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:25:52.033 16:43:23 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:25:52.033 16:43:23 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:25:52.033 16:43:23 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:25:52.033 16:43:23 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:25:52.033 16:43:23 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:25:52.033 16:43:23 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:25:52.033 16:43:23 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:25:52.033 16:43:23 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:25:52.033 16:43:23 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:25:52.033 16:43:23 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:25:52.033 16:43:23 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:25:52.033 16:43:23 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:25:52.033 16:43:23 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:25:52.033 16:43:23 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:25:52.033 16:43:23 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:25:52.033 16:43:23 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:25:52.033 16:43:23 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:25:52.033 16:43:23 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:25:52.033 16:43:23 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:25:52.033 16:43:23 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:25:52.033 16:43:23 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:25:52.033 16:43:23 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:25:52.033 16:43:23 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:25:52.033 16:43:23 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:25:52.033 16:43:23 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:25:52.033 16:43:23 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:25:52.033 16:43:23 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:25:52.033 16:43:23 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:25:52.033 16:43:23 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:25:52.033 16:43:23 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:25:52.033 16:43:23 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:25:52.033 16:43:23 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:25:52.033 16:43:23 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:25:52.033 16:43:23 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:25:52.033 16:43:23 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:25:52.033 16:43:23 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:25:52.033 16:43:23 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:25:52.033 16:43:23 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:25:52.033 16:43:23 -- common/build_config.sh@60 -- # 
CONFIG_IDXD_KERNEL=n 00:25:52.033 16:43:23 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:52.033 16:43:23 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:25:52.033 16:43:23 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:25:52.033 16:43:23 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:25:52.033 16:43:23 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:25:52.033 16:43:23 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:25:52.033 16:43:23 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:25:52.033 16:43:23 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:25:52.033 16:43:23 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:25:52.033 16:43:23 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:25:52.033 16:43:23 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:25:52.033 16:43:23 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:25:52.033 16:43:23 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:25:52.033 16:43:23 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:25:52.033 16:43:23 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:25:52.033 16:43:23 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:25:52.033 16:43:23 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:25:52.033 16:43:23 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:25:52.033 16:43:23 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:25:52.033 16:43:23 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:52.033 16:43:23 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:25:52.033 16:43:23 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:25:52.033 16:43:23 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:25:52.033 16:43:23 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:25:52.033 16:43:23 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:25:52.033 16:43:23 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:25:52.033 16:43:23 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:25:52.033 16:43:23 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:25:52.033 16:43:23 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:25:52.033 16:43:23 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:25:52.033 16:43:23 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:25:52.033 16:43:23 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:25:52.033 16:43:23 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:25:52.034 16:43:23 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:25:52.034 16:43:23 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:25:52.034 #define SPDK_CONFIG_H 00:25:52.034 #define SPDK_CONFIG_APPS 1 00:25:52.034 #define SPDK_CONFIG_ARCH native 00:25:52.034 #define SPDK_CONFIG_ASAN 1 00:25:52.034 #undef SPDK_CONFIG_AVAHI 00:25:52.034 #undef SPDK_CONFIG_CET 00:25:52.034 #define SPDK_CONFIG_COVERAGE 1 00:25:52.034 #define SPDK_CONFIG_CROSS_PREFIX 00:25:52.034 #undef SPDK_CONFIG_CRYPTO 00:25:52.034 #undef SPDK_CONFIG_CRYPTO_MLX5 00:25:52.034 #undef SPDK_CONFIG_CUSTOMOCF 00:25:52.034 
#undef SPDK_CONFIG_DAOS 00:25:52.034 #define SPDK_CONFIG_DAOS_DIR 00:25:52.034 #define SPDK_CONFIG_DEBUG 1 00:25:52.034 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:25:52.034 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:25:52.034 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:25:52.034 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:25:52.034 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:25:52.034 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:25:52.034 #define SPDK_CONFIG_EXAMPLES 1 00:25:52.034 #undef SPDK_CONFIG_FC 00:25:52.034 #define SPDK_CONFIG_FC_PATH 00:25:52.034 #define SPDK_CONFIG_FIO_PLUGIN 1 00:25:52.034 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:25:52.034 #undef SPDK_CONFIG_FUSE 00:25:52.034 #undef SPDK_CONFIG_FUZZER 00:25:52.034 #define SPDK_CONFIG_FUZZER_LIB 00:25:52.034 #undef SPDK_CONFIG_GOLANG 00:25:52.034 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:25:52.034 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:25:52.034 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:25:52.034 #undef SPDK_CONFIG_HAVE_LIBBSD 00:25:52.034 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:25:52.034 #define SPDK_CONFIG_IDXD 1 00:25:52.034 #undef SPDK_CONFIG_IDXD_KERNEL 00:25:52.034 #undef SPDK_CONFIG_IPSEC_MB 00:25:52.034 #define SPDK_CONFIG_IPSEC_MB_DIR 00:25:52.034 #define SPDK_CONFIG_ISAL 1 00:25:52.034 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:25:52.034 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:25:52.034 #define SPDK_CONFIG_LIBDIR 00:25:52.034 #undef SPDK_CONFIG_LTO 00:25:52.034 #define SPDK_CONFIG_MAX_LCORES 00:25:52.034 #define SPDK_CONFIG_NVME_CUSE 1 00:25:52.034 #undef SPDK_CONFIG_OCF 00:25:52.034 #define SPDK_CONFIG_OCF_PATH 00:25:52.034 #define SPDK_CONFIG_OPENSSL_PATH 00:25:52.034 #undef SPDK_CONFIG_PGO_CAPTURE 00:25:52.034 #undef SPDK_CONFIG_PGO_USE 00:25:52.034 #define SPDK_CONFIG_PREFIX /usr/local 00:25:52.034 #define SPDK_CONFIG_RAID5F 1 00:25:52.034 #undef SPDK_CONFIG_RBD 00:25:52.034 #define SPDK_CONFIG_RDMA 1 00:25:52.034 #define SPDK_CONFIG_RDMA_PROV verbs 00:25:52.034 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:25:52.034 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:25:52.034 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:25:52.034 #undef SPDK_CONFIG_SHARED 00:25:52.034 #undef SPDK_CONFIG_SMA 00:25:52.034 #define SPDK_CONFIG_TESTS 1 00:25:52.034 #undef SPDK_CONFIG_TSAN 00:25:52.034 #undef SPDK_CONFIG_UBLK 00:25:52.034 #define SPDK_CONFIG_UBSAN 1 00:25:52.034 #define SPDK_CONFIG_UNIT_TESTS 1 00:25:52.034 #undef SPDK_CONFIG_URING 00:25:52.034 #define SPDK_CONFIG_URING_PATH 00:25:52.034 #undef SPDK_CONFIG_URING_ZNS 00:25:52.034 #undef SPDK_CONFIG_USDT 00:25:52.034 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:25:52.034 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:25:52.034 #undef SPDK_CONFIG_VFIO_USER 00:25:52.034 #define SPDK_CONFIG_VFIO_USER_DIR 00:25:52.034 #define SPDK_CONFIG_VHOST 1 00:25:52.034 #define SPDK_CONFIG_VIRTIO 1 00:25:52.034 #undef SPDK_CONFIG_VTUNE 00:25:52.034 #define SPDK_CONFIG_VTUNE_DIR 00:25:52.034 #define SPDK_CONFIG_WERROR 1 00:25:52.034 #define SPDK_CONFIG_WPDK_DIR 00:25:52.034 #undef SPDK_CONFIG_XNVME 00:25:52.034 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:25:52.034 16:43:23 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:25:52.034 16:43:23 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:52.034 16:43:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.034 16:43:23 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.034 16:43:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.034 16:43:23 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:52.034 16:43:23 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:52.034 16:43:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:52.034 16:43:23 -- paths/export.sh@5 -- # export PATH 00:25:52.034 16:43:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:52.034 16:43:23 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:52.034 16:43:23 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:25:52.034 16:43:23 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:52.034 16:43:23 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:25:52.034 16:43:23 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:25:52.034 16:43:23 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:25:52.034 16:43:23 -- pm/common@16 -- # TEST_TAG=N/A 00:25:52.034 16:43:23 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:25:52.034 16:43:23 -- common/autotest_common.sh@52 -- # : 1 00:25:52.034 16:43:23 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:25:52.034 16:43:23 -- common/autotest_common.sh@56 -- # : 0 00:25:52.034 16:43:23 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:25:52.034 16:43:23 -- common/autotest_common.sh@58 -- # : 0 00:25:52.034 16:43:23 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:25:52.034 16:43:23 -- common/autotest_common.sh@60 -- # : 1 00:25:52.034 16:43:23 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:25:52.034 16:43:23 -- common/autotest_common.sh@62 -- # : 1 00:25:52.034 16:43:23 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:25:52.034 16:43:23 -- common/autotest_common.sh@64 -- # : 00:25:52.034 16:43:23 -- 
common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:25:52.034 16:43:23 -- common/autotest_common.sh@66 -- # : 0 00:25:52.034 16:43:23 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:25:52.034 16:43:23 -- common/autotest_common.sh@68 -- # : 0 00:25:52.034 16:43:23 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:25:52.034 16:43:23 -- common/autotest_common.sh@70 -- # : 0 00:25:52.034 16:43:23 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:25:52.035 16:43:23 -- common/autotest_common.sh@72 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:25:52.035 16:43:23 -- common/autotest_common.sh@74 -- # : 1 00:25:52.035 16:43:23 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:25:52.035 16:43:23 -- common/autotest_common.sh@76 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:25:52.035 16:43:23 -- common/autotest_common.sh@78 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:25:52.035 16:43:23 -- common/autotest_common.sh@80 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:25:52.035 16:43:23 -- common/autotest_common.sh@82 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:25:52.035 16:43:23 -- common/autotest_common.sh@84 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:25:52.035 16:43:23 -- common/autotest_common.sh@86 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:25:52.035 16:43:23 -- common/autotest_common.sh@88 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:25:52.035 16:43:23 -- common/autotest_common.sh@90 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:25:52.035 16:43:23 -- common/autotest_common.sh@92 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:25:52.035 16:43:23 -- common/autotest_common.sh@94 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:25:52.035 16:43:23 -- common/autotest_common.sh@96 -- # : rdma 00:25:52.035 16:43:23 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:25:52.035 16:43:23 -- common/autotest_common.sh@98 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:25:52.035 16:43:23 -- common/autotest_common.sh@100 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:25:52.035 16:43:23 -- common/autotest_common.sh@102 -- # : 1 00:25:52.035 16:43:23 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:25:52.035 16:43:23 -- common/autotest_common.sh@104 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:25:52.035 16:43:23 -- common/autotest_common.sh@106 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:25:52.035 16:43:23 -- common/autotest_common.sh@108 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:25:52.035 16:43:23 -- common/autotest_common.sh@110 -- # : 0 00:25:52.035 16:43:23 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:25:52.035 16:43:23 -- common/autotest_common.sh@112 -- # : 0 00:25:52.035 
16:43:23 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:25:52.035 16:43:23 -- common/autotest_common.sh@114 -- # : 1 00:25:52.035 16:43:23 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:25:52.035 16:43:23 -- common/autotest_common.sh@116 -- # : 1 00:25:52.035 16:43:23 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:25:52.294 16:43:23 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:25:52.294 16:43:23 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:25:52.294 16:43:23 -- common/autotest_common.sh@120 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:25:52.294 16:43:23 -- common/autotest_common.sh@122 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:25:52.294 16:43:23 -- common/autotest_common.sh@124 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:25:52.294 16:43:23 -- common/autotest_common.sh@126 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:25:52.294 16:43:23 -- common/autotest_common.sh@128 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:25:52.294 16:43:23 -- common/autotest_common.sh@130 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:25:52.294 16:43:23 -- common/autotest_common.sh@132 -- # : v22.11.4 00:25:52.294 16:43:23 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:25:52.294 16:43:23 -- common/autotest_common.sh@134 -- # : true 00:25:52.294 16:43:23 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:25:52.294 16:43:23 -- common/autotest_common.sh@136 -- # : 1 00:25:52.294 16:43:23 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:25:52.294 16:43:23 -- common/autotest_common.sh@138 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:25:52.294 16:43:23 -- common/autotest_common.sh@140 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:25:52.294 16:43:23 -- common/autotest_common.sh@142 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:25:52.294 16:43:23 -- common/autotest_common.sh@144 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:25:52.294 16:43:23 -- common/autotest_common.sh@146 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:25:52.294 16:43:23 -- common/autotest_common.sh@148 -- # : 00:25:52.294 16:43:23 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:25:52.294 16:43:23 -- common/autotest_common.sh@150 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:25:52.294 16:43:23 -- common/autotest_common.sh@152 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:25:52.294 16:43:23 -- common/autotest_common.sh@154 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:25:52.294 16:43:23 -- common/autotest_common.sh@156 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:25:52.294 16:43:23 -- common/autotest_common.sh@158 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:25:52.294 16:43:23 -- 
common/autotest_common.sh@160 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:25:52.294 16:43:23 -- common/autotest_common.sh@163 -- # : 00:25:52.294 16:43:23 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:25:52.294 16:43:23 -- common/autotest_common.sh@165 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:25:52.294 16:43:23 -- common/autotest_common.sh@167 -- # : 0 00:25:52.294 16:43:23 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:25:52.294 16:43:23 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:52.294 16:43:23 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:25:52.294 16:43:23 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:52.294 16:43:23 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:25:52.294 16:43:23 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:52.294 16:43:23 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:52.294 16:43:23 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:52.294 16:43:23 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:25:52.294 16:43:23 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:25:52.294 16:43:23 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:25:52.294 16:43:23 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:52.294 16:43:23 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:52.294 16:43:23 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:25:52.294 16:43:23 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:25:52.294 16:43:23 -- 
common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:52.294 16:43:23 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:52.294 16:43:23 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:52.294 16:43:23 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:52.294 16:43:23 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:25:52.294 16:43:23 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:25:52.294 16:43:23 -- common/autotest_common.sh@196 -- # cat 00:25:52.294 16:43:23 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:25:52.294 16:43:23 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:52.294 16:43:23 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:52.294 16:43:23 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:52.294 16:43:23 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:52.294 16:43:23 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:25:52.294 16:43:23 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:25:52.294 16:43:23 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:52.294 16:43:23 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:25:52.294 16:43:23 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:52.294 16:43:23 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:25:52.294 16:43:23 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:25:52.294 16:43:23 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:25:52.294 16:43:23 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:52.294 16:43:23 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:25:52.294 16:43:23 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:52.294 16:43:23 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:25:52.294 16:43:23 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:52.294 16:43:23 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:52.294 16:43:23 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:25:52.294 16:43:23 -- common/autotest_common.sh@249 -- # export valgrind= 00:25:52.294 16:43:23 -- common/autotest_common.sh@249 -- # valgrind= 00:25:52.295 16:43:23 -- common/autotest_common.sh@255 -- # uname -s 00:25:52.295 16:43:23 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:25:52.295 16:43:23 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:25:52.295 16:43:23 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:25:52.295 16:43:23 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:25:52.295 16:43:23 -- 
common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@265 -- # MAKE=make 00:25:52.295 16:43:23 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:25:52.295 16:43:23 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:25:52.295 16:43:23 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:25:52.295 16:43:23 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:25:52.295 16:43:23 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:25:52.295 16:43:23 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:25:52.295 16:43:23 -- common/autotest_common.sh@309 -- # [[ -z 143378 ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@309 -- # kill -0 143378 00:25:52.295 16:43:23 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:25:52.295 16:43:23 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:25:52.295 16:43:23 -- common/autotest_common.sh@322 -- # local mount target_dir 00:25:52.295 16:43:23 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:25:52.295 16:43:23 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:25:52.295 16:43:23 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:25:52.295 16:43:23 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:25:52.295 16:43:23 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.waePm8 00:25:52.295 16:43:23 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:25:52.295 16:43:23 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.waePm8/tests/interrupt /tmp/spdk.waePm8 00:25:52.295 16:43:23 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:25:52.295 16:43:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:52.295 16:43:23 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:25:52.295 16:43:23 -- common/autotest_common.sh@318 -- # df -T 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:25:52.295 16:43:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:25:52.295 16:43:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=9439932416 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:25:52.295 16:43:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=11160084480 00:25:52.295 16:43:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # 
mounts["$mount"]=tmpfs 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267146240 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:25:52.295 16:43:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:25:52.295 16:43:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:25:52.295 16:43:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:25:52.295 16:43:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:25:52.295 16:43:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:25:52.295 16:43:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:25:52.295 16:43:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:25:52.295 16:43:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:25:52.295 16:43:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=97105018880 00:25:52.295 16:43:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:25:52.295 16:43:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=2597761024 00:25:52.295 16:43:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:25:52.295 16:43:23 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:25:52.295 * Looking for test storage... 
00:25:52.295 16:43:23 -- common/autotest_common.sh@359 -- # local target_space new_size 00:25:52.295 16:43:23 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:25:52.295 16:43:23 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:25:52.295 16:43:23 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:52.295 16:43:23 -- common/autotest_common.sh@363 -- # mount=/ 00:25:52.295 16:43:23 -- common/autotest_common.sh@365 -- # target_space=9439932416 00:25:52.295 16:43:23 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:25:52.295 16:43:23 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:25:52.295 16:43:23 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@372 -- # new_size=13374676992 00:25:52.295 16:43:23 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:25:52.295 16:43:23 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:52.295 16:43:23 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:25:52.295 16:43:23 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:52.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:25:52.295 16:43:23 -- common/autotest_common.sh@380 -- # return 0 00:25:52.295 16:43:23 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:25:52.295 16:43:23 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:25:52.295 16:43:23 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:25:52.295 16:43:23 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:25:52.295 16:43:23 -- common/autotest_common.sh@1672 -- # true 00:25:52.295 16:43:23 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:25:52.295 16:43:23 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:25:52.295 16:43:23 -- common/autotest_common.sh@27 -- # exec 00:25:52.295 16:43:23 -- common/autotest_common.sh@29 -- # exec 00:25:52.295 16:43:23 -- common/autotest_common.sh@31 -- # xtrace_restore 00:25:52.295 16:43:23 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:25:52.295 16:43:23 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:25:52.295 16:43:23 -- common/autotest_common.sh@18 -- # set -x 00:25:52.295 16:43:23 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:52.295 16:43:23 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:25:52.295 16:43:23 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:25:52.295 16:43:23 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:25:52.295 16:43:23 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:25:52.295 16:43:23 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:25:52.296 16:43:23 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:52.296 16:43:23 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:25:52.296 16:43:23 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:25:52.296 16:43:23 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.296 16:43:23 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:25:52.296 16:43:23 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143418 00:25:52.296 16:43:23 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:52.296 16:43:23 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 143418 /var/tmp/spdk.sock 00:25:52.296 16:43:23 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:25:52.296 16:43:23 -- common/autotest_common.sh@819 -- # '[' -z 143418 ']' 00:25:52.296 16:43:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.296 16:43:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:52.296 16:43:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.296 16:43:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:52.296 16:43:23 -- common/autotest_common.sh@10 -- # set +x 00:25:52.296 [2024-07-13 16:43:23.640521] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
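Every line in this log carrying a "16:43:23 -- file@line -- $" prefix is produced by the instrumentation traced immediately above: xtrace is pointed at a dedicated file descriptor, PS4 embeds a timestamp plus the last two path components of the sourcing script and the line number, and errtrace/extdebug arm an ERR trap that prints a backtrace. A compact reproduction of that setup, assuming fd 13 is free (the real xtrace_fd/xtrace_restore helpers additionally save and restore the caller's -x flag in X_STACK):

#!/usr/bin/env bash
exec 13>>trace.log
BASH_XTRACEFD=13               # xtrace lines go to fd 13, not stderr
# \t expands to HH:MM:SS in PS4; the parameter expansion keeps only the
# last two path components, giving prefixes like 'common/autotest_common.sh@52'
PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
set -o errtrace                # propagate the ERR trap into functions
shopt -s extdebug
trap 'trap - ERR; echo "error at ${BASH_SOURCE[0]}:${LINENO}" >&2' ERR
set -x                         # from here on, every command is logged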
00:25:52.296 [2024-07-13 16:43:23.641012] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143418 ] 00:25:52.555 [2024-07-13 16:43:23.805212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:52.555 [2024-07-13 16:43:23.880841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.555 [2024-07-13 16:43:23.881025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.555 [2024-07-13 16:43:23.881025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.555 [2024-07-13 16:43:23.994104] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:53.569 16:43:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:53.569 16:43:24 -- common/autotest_common.sh@852 -- # return 0 00:25:53.569 16:43:24 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:25:53.569 16:43:24 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:25:53.569 16:43:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.569 16:43:24 -- common/autotest_common.sh@10 -- # set +x 00:25:53.569 16:43:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.569 16:43:24 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:25:53.569 "name": "app_thread", 00:25:53.569 "id": 1, 00:25:53.569 "active_pollers": [], 00:25:53.569 "timed_pollers": [ 00:25:53.569 { 00:25:53.569 "name": "rpc_subsystem_poll", 00:25:53.569 "id": 1, 00:25:53.569 "state": "waiting", 00:25:53.569 "run_count": 0, 00:25:53.569 "busy_count": 0, 00:25:53.569 "period_ticks": 8400000 00:25:53.569 } 00:25:53.569 ], 00:25:53.569 "paused_pollers": [] 00:25:53.569 }' 00:25:53.569 16:43:24 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:25:53.569 16:43:24 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:25:53.569 16:43:24 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:25:53.569 16:43:24 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:25:53.569 16:43:24 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:25:53.569 16:43:24 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:25:53.569 16:43:24 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:25:53.569 16:43:24 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:25:53.569 16:43:24 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:25:53.569 5000+0 records in 00:25:53.569 5000+0 records out 00:25:53.569 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0330583 s, 310 MB/s 00:25:53.569 16:43:24 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:25:53.569 AIO0 00:25:53.569 16:43:25 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:53.855 16:43:25 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:25:54.114 16:43:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.114 16:43:25 -- common/autotest_common.sh@10 -- # set +x 00:25:54.114 16:43:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:25:54.114 "name": "app_thread", 00:25:54.114 "id": 1, 00:25:54.114 "active_pollers": [], 00:25:54.114 "timed_pollers": [ 00:25:54.114 { 00:25:54.114 "name": "rpc_subsystem_poll", 00:25:54.114 "id": 1, 00:25:54.114 "state": "waiting", 00:25:54.114 "run_count": 0, 00:25:54.114 "busy_count": 0, 00:25:54.114 "period_ticks": 8400000 00:25:54.114 } 00:25:54.114 ], 00:25:54.114 "paused_pollers": [] 00:25:54.114 }' 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:25:54.114 16:43:25 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 143418 00:25:54.114 16:43:25 -- common/autotest_common.sh@926 -- # '[' -z 143418 ']' 00:25:54.114 16:43:25 -- common/autotest_common.sh@930 -- # kill -0 143418 00:25:54.114 16:43:25 -- common/autotest_common.sh@931 -- # uname 00:25:54.114 16:43:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:54.114 16:43:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143418 00:25:54.114 16:43:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:54.114 16:43:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:54.114 16:43:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143418' 00:25:54.114 killing process with pid 143418 00:25:54.114 16:43:25 -- common/autotest_common.sh@945 -- # kill 143418 00:25:54.114 16:43:25 -- common/autotest_common.sh@950 -- # wait 143418 00:25:54.683 16:43:25 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:25:54.683 16:43:25 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:25:54.683 ************************************ 00:25:54.683 END TEST reap_unregistered_poller 00:25:54.683 ************************************ 00:25:54.683 00:25:54.683 real 0m2.674s 00:25:54.683 user 0m1.624s 00:25:54.683 sys 0m0.742s 00:25:54.683 16:43:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.683 16:43:25 -- common/autotest_common.sh@10 -- # set +x 00:25:54.683 16:43:26 -- spdk/autotest.sh@204 -- # uname -s 00:25:54.683 16:43:26 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:25:54.683 16:43:26 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:25:54.683 16:43:26 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:25:54.683 16:43:26 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:54.683 16:43:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:54.683 16:43:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:54.683 16:43:26 -- 
common/autotest_common.sh@10 -- # set +x 00:25:54.683 ************************************ 00:25:54.683 START TEST spdk_dd 00:25:54.683 ************************************ 00:25:54.683 16:43:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:25:54.683 * Looking for test storage... 00:25:54.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:54.942 16:43:26 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:54.942 16:43:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.942 16:43:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.942 16:43:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.942 16:43:26 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:54.943 16:43:26 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:54.943 16:43:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:54.943 16:43:26 -- paths/export.sh@5 -- # export PATH 00:25:54.943 16:43:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:54.943 16:43:26 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:55.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:25:55.202 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:57.105 16:43:28 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:25:57.105 16:43:28 -- dd/dd.sh@11 -- # nvme_in_userspace 00:25:57.105 16:43:28 -- scripts/common.sh@311 -- # local bdf bdfs 00:25:57.105 16:43:28 -- scripts/common.sh@312 -- # local nvmes 00:25:57.105 16:43:28 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:25:57.105 16:43:28 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:57.105 16:43:28 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:25:57.105 16:43:28 -- scripts/common.sh@297 -- # local bdf= 00:25:57.105 16:43:28 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:25:57.105 16:43:28 -- scripts/common.sh@232 -- # local class 00:25:57.105 
16:43:28 -- scripts/common.sh@233 -- # local subclass 00:25:57.105 16:43:28 -- scripts/common.sh@234 -- # local progif 00:25:57.105 16:43:28 -- scripts/common.sh@235 -- # printf %02x 1 00:25:57.105 16:43:28 -- scripts/common.sh@235 -- # class=01 00:25:57.105 16:43:28 -- scripts/common.sh@236 -- # printf %02x 8 00:25:57.105 16:43:28 -- scripts/common.sh@236 -- # subclass=08 00:25:57.105 16:43:28 -- scripts/common.sh@237 -- # printf %02x 2 00:25:57.105 16:43:28 -- scripts/common.sh@237 -- # progif=02 00:25:57.105 16:43:28 -- scripts/common.sh@239 -- # hash lspci 00:25:57.105 16:43:28 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:25:57.105 16:43:28 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:25:57.105 16:43:28 -- scripts/common.sh@242 -- # grep -i -- -p02 00:25:57.105 16:43:28 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:57.105 16:43:28 -- scripts/common.sh@244 -- # tr -d '"' 00:25:57.105 16:43:28 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:57.105 16:43:28 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:25:57.105 16:43:28 -- scripts/common.sh@15 -- # local i 00:25:57.105 16:43:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:25:57.105 16:43:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:25:57.105 16:43:28 -- scripts/common.sh@24 -- # return 0 00:25:57.105 16:43:28 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:25:57.105 16:43:28 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:25:57.105 16:43:28 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:25:57.105 16:43:28 -- scripts/common.sh@322 -- # uname -s 00:25:57.105 16:43:28 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:25:57.105 16:43:28 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:25:57.105 16:43:28 -- scripts/common.sh@327 -- # (( 1 )) 00:25:57.105 16:43:28 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:25:57.105 16:43:28 -- dd/dd.sh@13 -- # check_liburing 00:25:57.105 16:43:28 -- dd/common.sh@139 -- # local lib so 00:25:57.105 16:43:28 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:25:57.105 16:43:28 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:25:57.105 16:43:28 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:25:57.105 16:43:28 -- dd/common.sh@142 -- # read -r lib _ so _ 00:25:57.105 16:43:28 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:25:57.105 16:43:28 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:25:57.105 16:43:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:57.105 16:43:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:57.105 16:43:28 -- common/autotest_common.sh@10 -- # set +x 00:25:57.105 ************************************ 00:25:57.105 START TEST spdk_dd_basic_rw 00:25:57.105 ************************************ 00:25:57.105 16:43:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:25:57.105 * Looking for test storage... 
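The run of "[[ lib… == liburing.so.* ]]" tests above is check_liburing in dd/common.sh: spdk_dd is executed with LD_TRACE_LOADED_OBJECTS=1, which makes the dynamic loader print the shared objects it would map (the mechanism behind ldd) instead of running the program, and each library name is matched against liburing.so.*. A standalone version of that check, assuming any dynamically linked binary as the argument:

#!/usr/bin/env bash
# Succeeds iff the given binary links against liburing.
check_liburing() {
    local lib so liburing_in_use=0
    while read -r lib _ so _; do
        # loader output: "libname.so.N => /path/libname.so.N (0x...)"
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(LD_TRACE_LOADED_OBJECTS=1 "$1")
    (( liburing_in_use ))
}

# usage: check_liburing build/bin/spdk_dd && echo "uring available"

As the trace shows, dd.sh then weighs liburing_in_use against SPDK_TEST_URING at dd/dd.sh@15 before moving on to the basic_rw tests.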
00:25:57.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:57.105 16:43:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:57.105 16:43:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.105 16:43:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.105 16:43:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.105 16:43:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:57.105 16:43:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:57.105 16:43:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:57.105 16:43:28 -- paths/export.sh@5 -- # export PATH 00:25:57.105 16:43:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:57.105 16:43:28 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:25:57.105 16:43:28 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:25:57.105 16:43:28 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:25:57.106 16:43:28 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:25:57.106 16:43:28 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:25:57.106 16:43:28 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:25:57.106 16:43:28 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:25:57.106 16:43:28 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:57.106 16:43:28 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:57.106 16:43:28 -- 
dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:25:57.106 16:43:28 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:25:57.106 16:43:28 -- dd/common.sh@126 -- # mapfile -t id 00:25:57.106 16:43:28 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:25:57.366 16:43:28 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: 
nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2250 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable 
Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:25:57.366 16:43:28 -- dd/common.sh@130 -- # lbaf=04 00:25:57.367 16:43:28 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not 
Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset 
Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2250 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:25:57.367 16:43:28 -- dd/common.sh@132 -- # lbaf=4096 00:25:57.367 16:43:28 -- dd/common.sh@134 -- # echo 4096 00:25:57.367 16:43:28 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:25:57.367 16:43:28 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:25:57.367 16:43:28 -- dd/basic_rw.sh@96 -- # gen_conf 00:25:57.367 16:43:28 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:25:57.367 16:43:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:57.367 16:43:28 -- common/autotest_common.sh@10 -- # set +x 00:25:57.367 16:43:28 -- dd/basic_rw.sh@96 -- # : 00:25:57.367 16:43:28 -- dd/common.sh@31 -- # xtrace_disable 00:25:57.367 16:43:28 -- common/autotest_common.sh@10 -- # set +x 00:25:57.367 ************************************ 
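The two enormous [[ ... =~ ... ]] expressions above are get_native_nvme_bs in dd/common.sh doing a two-step regex extraction over the captured spdk_nvme_identify output: "Current LBA Format: *LBA Format #([0-9]+)" first yields the in-use format index (04 here), then "LBA Format #04: Data Size: *([0-9]+)" yields that format's data size, giving a native block size of 4096. A sketch of the same BASH_REMATCH pattern, with the identify text passed in as one string:

#!/usr/bin/env bash
# Extract the native block size from `spdk_nvme_identify` output
# (simplified model of dd/common.sh's get_native_nvme_bs).
get_native_bs() {
    local id=$1 lbaf
    [[ $id =~ Current\ LBA\ Format:\ *LBA\ Format\ \#([0-9]+) ]] || return 1
    lbaf=${BASH_REMATCH[1]}                                   # e.g. 04
    [[ $id =~ LBA\ Format\ \#$lbaf:\ Data\ Size:\ *([0-9]+) ]] || return 1
    echo "${BASH_REMATCH[1]}"                                 # e.g. 4096
}

# usage (traddr as in this run):
#   bs=$(get_native_bs "$(spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')")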
00:25:57.367 START TEST dd_bs_lt_native_bs
************************************
00:25:57.367 16:43:28 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:57.367 16:43:28 -- common/autotest_common.sh@640 -- # local es=0
00:25:57.367 16:43:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:57.367 16:43:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:57.367 16:43:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:25:57.367 16:43:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:57.367 16:43:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:25:57.367 16:43:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:57.367 16:43:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:25:57.367 16:43:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:57.367 16:43:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:57.367 16:43:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:57.626 {
00:25:57.626 "subsystems": [
00:25:57.626 {
00:25:57.626 "subsystem": "bdev",
00:25:57.626 "config": [
00:25:57.626 {
00:25:57.626 "params": {
00:25:57.626 "trtype": "pcie",
00:25:57.626 "traddr": "0000:00:06.0",
00:25:57.626 "name": "Nvme0"
00:25:57.626 },
00:25:57.626 "method": "bdev_nvme_attach_controller"
00:25:57.626 },
00:25:57.626 {
00:25:57.626 "method": "bdev_wait_for_examine"
00:25:57.626 }
00:25:57.626 ]
00:25:57.626 }
00:25:57.626 ]
00:25:57.626 }
00:25:57.626 [2024-07-13 16:43:28.906718] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:25:57.626 [2024-07-13 16:43:28.907166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143727 ]
00:25:57.626 [2024-07-13 16:43:29.069617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:57.884 [2024-07-13 16:43:29.156791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:58.143 [2024-07-13 16:43:29.357559] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size
00:25:58.143 [2024-07-13 16:43:29.357869] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:58.143 [2024-07-13 16:43:29.552277] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:58.401 16:43:29 -- common/autotest_common.sh@643 -- # es=234
00:25:58.401 16:43:29 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:25:58.401 16:43:29 -- common/autotest_common.sh@652 -- # es=106
00:25:58.401 16:43:29 -- common/autotest_common.sh@653 -- # case "$es" in
00:25:58.401 16:43:29 -- common/autotest_common.sh@660 -- # es=1
************************************
00:25:58.401 END TEST dd_bs_lt_native_bs
************************************
00:25:58.401 16:43:29 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:25:58.401
00:25:58.401 real 0m0.952s
00:25:58.401 user 0m0.578s
00:25:58.401 sys 0m0.330s
00:25:58.401 16:43:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:58.401 16:43:29 -- common/autotest_common.sh@10 -- # set +x
00:25:58.401 16:43:29 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096
00:25:58.401 16:43:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:25:58.401 16:43:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:25:58.401 16:43:29 -- common/autotest_common.sh@10 -- # set +x
00:25:58.401 ************************************
00:25:58.401 START TEST dd_rw
************************************
00:25:58.401 16:43:29 -- common/autotest_common.sh@1104 -- # basic_rw 4096
00:25:58.401 16:43:29 -- dd/basic_rw.sh@11 -- # local native_bs=4096
00:25:58.401 16:43:29 -- dd/basic_rw.sh@12 -- # local count size
00:25:58.401 16:43:29 -- dd/basic_rw.sh@13 -- # local qds bss
00:25:58.401 16:43:29 -- dd/basic_rw.sh@15 -- # qds=(1 64)
00:25:58.401 16:43:29 -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:25:58.401 16:43:29 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:25:58.401 16:43:29 -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:25:58.401 16:43:29 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:25:58.401 16:43:29 -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:25:58.401 16:43:29 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:25:58.401 16:43:29 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:25:58.401 16:43:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:25:58.401 16:43:29 -- dd/basic_rw.sh@23 -- # count=15
00:25:58.401 16:43:29 -- dd/basic_rw.sh@24 -- # count=15
00:25:58.401 16:43:29 -- dd/basic_rw.sh@25 -- # size=61440
00:25:58.401 16:43:29 -- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:25:58.401 16:43:29 -- dd/common.sh@98 -- # xtrace_disable
00:25:58.401 16:43:29 -- common/autotest_common.sh@10 -- # set +x
00:25:58.966 16:43:30 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
00:25:58.966 16:43:30 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:58.966 16:43:30 -- dd/common.sh@31 -- # xtrace_disable 00:25:58.966 16:43:30 -- common/autotest_common.sh@10 -- # set +x 00:25:58.966 { 00:25:58.966 "subsystems": [ 00:25:58.966 { 00:25:58.966 "subsystem": "bdev", 00:25:58.966 "config": [ 00:25:58.966 { 00:25:58.966 "params": { 00:25:58.966 "trtype": "pcie", 00:25:58.966 "traddr": "0000:00:06.0", 00:25:58.966 "name": "Nvme0" 00:25:58.966 }, 00:25:58.966 "method": "bdev_nvme_attach_controller" 00:25:58.966 }, 00:25:58.966 { 00:25:58.966 "method": "bdev_wait_for_examine" 00:25:58.966 } 00:25:58.966 ] 00:25:58.966 } 00:25:58.966 ] 00:25:58.966 } 00:25:59.223 [2024-07-13 16:43:30.442616] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:59.223 [2024-07-13 16:43:30.443072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143775 ] 00:25:59.223 [2024-07-13 16:43:30.599295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.223 [2024-07-13 16:43:30.669525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.048  Copying: 60/60 [kB] (average 19 MBps) 00:26:00.048 00:26:00.048 16:43:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:26:00.048 16:43:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:00.048 16:43:31 -- dd/common.sh@31 -- # xtrace_disable 00:26:00.048 16:43:31 -- common/autotest_common.sh@10 -- # set +x 00:26:00.048 { 00:26:00.048 "subsystems": [ 00:26:00.048 { 00:26:00.048 "subsystem": "bdev", 00:26:00.048 "config": [ 00:26:00.048 { 00:26:00.048 "params": { 00:26:00.048 "trtype": "pcie", 00:26:00.048 "traddr": "0000:00:06.0", 00:26:00.048 "name": "Nvme0" 00:26:00.048 }, 00:26:00.048 "method": "bdev_nvme_attach_controller" 00:26:00.048 }, 00:26:00.048 { 00:26:00.048 "method": "bdev_wait_for_examine" 00:26:00.048 } 00:26:00.048 ] 00:26:00.048 } 00:26:00.048 ] 00:26:00.048 } 00:26:00.048 [2024-07-13 16:43:31.358578] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:00.048 [2024-07-13 16:43:31.359015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143797 ] 00:26:00.048 [2024-07-13 16:43:31.515228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.305 [2024-07-13 16:43:31.590032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.824  Copying: 60/60 [kB] (average 19 MBps) 00:26:00.824 00:26:00.824 16:43:32 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:00.824 16:43:32 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:00.824 16:43:32 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:00.824 16:43:32 -- dd/common.sh@11 -- # local nvme_ref= 00:26:00.824 16:43:32 -- dd/common.sh@12 -- # local size=61440 00:26:00.824 16:43:32 -- dd/common.sh@14 -- # local bs=1048576 00:26:00.824 16:43:32 -- dd/common.sh@15 -- # local count=1 00:26:00.824 16:43:32 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:00.824 16:43:32 -- dd/common.sh@18 -- # gen_conf 00:26:00.824 16:43:32 -- dd/common.sh@31 -- # xtrace_disable 00:26:00.824 16:43:32 -- common/autotest_common.sh@10 -- # set +x 00:26:00.824 { 00:26:00.824 "subsystems": [ 00:26:00.824 { 00:26:00.824 "subsystem": "bdev", 00:26:00.824 "config": [ 00:26:00.824 { 00:26:00.824 "params": { 00:26:00.824 "trtype": "pcie", 00:26:00.824 "traddr": "0000:00:06.0", 00:26:00.824 "name": "Nvme0" 00:26:00.824 }, 00:26:00.824 "method": "bdev_nvme_attach_controller" 00:26:00.824 }, 00:26:00.824 { 00:26:00.824 "method": "bdev_wait_for_examine" 00:26:00.824 } 00:26:00.824 ] 00:26:00.824 } 00:26:00.824 ] 00:26:00.824 } 00:26:00.824 [2024-07-13 16:43:32.285168] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:00.824 [2024-07-13 16:43:32.285451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143818 ] 00:26:01.082 [2024-07-13 16:43:32.428902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.082 [2024-07-13 16:43:32.504584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.906  Copying: 1024/1024 [kB] (average 333 MBps) 00:26:01.906 00:26:01.906 16:43:33 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:01.906 16:43:33 -- dd/basic_rw.sh@23 -- # count=15 00:26:01.906 16:43:33 -- dd/basic_rw.sh@24 -- # count=15 00:26:01.906 16:43:33 -- dd/basic_rw.sh@25 -- # size=61440 00:26:01.906 16:43:33 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:01.906 16:43:33 -- dd/common.sh@98 -- # xtrace_disable 00:26:01.906 16:43:33 -- common/autotest_common.sh@10 -- # set +x 00:26:02.470 16:43:33 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:26:02.470 16:43:33 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:02.470 16:43:33 -- dd/common.sh@31 -- # xtrace_disable 00:26:02.470 16:43:33 -- common/autotest_common.sh@10 -- # set +x 00:26:02.470 { 00:26:02.470 "subsystems": [ 00:26:02.470 { 00:26:02.470 "subsystem": "bdev", 00:26:02.470 "config": [ 00:26:02.470 { 00:26:02.470 "params": { 00:26:02.470 "trtype": "pcie", 00:26:02.470 "traddr": "0000:00:06.0", 00:26:02.471 "name": "Nvme0" 00:26:02.471 }, 00:26:02.471 "method": "bdev_nvme_attach_controller" 00:26:02.471 }, 00:26:02.471 { 00:26:02.471 "method": "bdev_wait_for_examine" 00:26:02.471 } 00:26:02.471 ] 00:26:02.471 } 00:26:02.471 ] 00:26:02.471 } 00:26:02.471 [2024-07-13 16:43:33.767772] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:02.471 [2024-07-13 16:43:33.768188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143840 ] 00:26:02.471 [2024-07-13 16:43:33.913332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.728 [2024-07-13 16:43:33.987506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.295  Copying: 60/60 [kB] (average 58 MBps) 00:26:03.295 00:26:03.295 16:43:34 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:03.295 16:43:34 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:26:03.295 16:43:34 -- dd/common.sh@31 -- # xtrace_disable 00:26:03.295 16:43:34 -- common/autotest_common.sh@10 -- # set +x 00:26:03.295 { 00:26:03.295 "subsystems": [ 00:26:03.295 { 00:26:03.295 "subsystem": "bdev", 00:26:03.295 "config": [ 00:26:03.295 { 00:26:03.295 "params": { 00:26:03.295 "trtype": "pcie", 00:26:03.295 "traddr": "0000:00:06.0", 00:26:03.295 "name": "Nvme0" 00:26:03.295 }, 00:26:03.295 "method": "bdev_nvme_attach_controller" 00:26:03.295 }, 00:26:03.295 { 00:26:03.295 "method": "bdev_wait_for_examine" 00:26:03.295 } 00:26:03.295 ] 00:26:03.295 } 00:26:03.295 ] 00:26:03.295 } 00:26:03.295 [2024-07-13 16:43:34.678173] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:03.295 [2024-07-13 16:43:34.678652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143860 ] 00:26:03.553 [2024-07-13 16:43:34.835904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.553 [2024-07-13 16:43:34.911447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.071  Copying: 60/60 [kB] (average 58 MBps) 00:26:04.071 00:26:04.331 16:43:35 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:04.331 16:43:35 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:04.331 16:43:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:04.331 16:43:35 -- dd/common.sh@11 -- # local nvme_ref= 00:26:04.332 16:43:35 -- dd/common.sh@12 -- # local size=61440 00:26:04.332 16:43:35 -- dd/common.sh@14 -- # local bs=1048576 00:26:04.332 16:43:35 -- dd/common.sh@15 -- # local count=1 00:26:04.332 16:43:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:04.332 16:43:35 -- dd/common.sh@18 -- # gen_conf 00:26:04.332 16:43:35 -- dd/common.sh@31 -- # xtrace_disable 00:26:04.332 16:43:35 -- common/autotest_common.sh@10 -- # set +x 00:26:04.332 [2024-07-13 16:43:35.608633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:04.332 [2024-07-13 16:43:35.609130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143881 ] 00:26:04.332 { 00:26:04.332 "subsystems": [ 00:26:04.332 { 00:26:04.332 "subsystem": "bdev", 00:26:04.332 "config": [ 00:26:04.332 { 00:26:04.332 "params": { 00:26:04.332 "trtype": "pcie", 00:26:04.332 "traddr": "0000:00:06.0", 00:26:04.332 "name": "Nvme0" 00:26:04.332 }, 00:26:04.332 "method": "bdev_nvme_attach_controller" 00:26:04.332 }, 00:26:04.332 { 00:26:04.332 "method": "bdev_wait_for_examine" 00:26:04.332 } 00:26:04.332 ] 00:26:04.332 } 00:26:04.332 ] 00:26:04.332 } 00:26:04.332 [2024-07-13 16:43:35.754036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.590 [2024-07-13 16:43:35.831222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.155  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:05.155 00:26:05.155 16:43:36 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:05.155 16:43:36 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:05.155 16:43:36 -- dd/basic_rw.sh@23 -- # count=7 00:26:05.155 16:43:36 -- dd/basic_rw.sh@24 -- # count=7 00:26:05.155 16:43:36 -- dd/basic_rw.sh@25 -- # size=57344 00:26:05.155 16:43:36 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:05.155 16:43:36 -- dd/common.sh@98 -- # xtrace_disable 00:26:05.155 16:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:05.722 16:43:37 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:26:05.722 16:43:37 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:05.722 16:43:37 -- dd/common.sh@31 -- # xtrace_disable 00:26:05.722 16:43:37 -- common/autotest_common.sh@10 -- # set +x 00:26:05.722 { 00:26:05.722 "subsystems": [ 00:26:05.722 { 
00:26:05.722 "subsystem": "bdev", 00:26:05.722 "config": [ 00:26:05.722 { 00:26:05.722 "params": { 00:26:05.722 "trtype": "pcie", 00:26:05.722 "traddr": "0000:00:06.0", 00:26:05.722 "name": "Nvme0" 00:26:05.722 }, 00:26:05.722 "method": "bdev_nvme_attach_controller" 00:26:05.722 }, 00:26:05.722 { 00:26:05.722 "method": "bdev_wait_for_examine" 00:26:05.722 } 00:26:05.722 ] 00:26:05.722 } 00:26:05.722 ] 00:26:05.722 } 00:26:05.722 [2024-07-13 16:43:37.140006] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:05.722 [2024-07-13 16:43:37.140466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143901 ] 00:26:05.980 [2024-07-13 16:43:37.301429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.981 [2024-07-13 16:43:37.380771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.808  Copying: 56/56 [kB] (average 27 MBps) 00:26:06.808 00:26:06.808 16:43:38 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:06.808 16:43:38 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:26:06.808 16:43:38 -- dd/common.sh@31 -- # xtrace_disable 00:26:06.808 16:43:38 -- common/autotest_common.sh@10 -- # set +x 00:26:06.808 { 00:26:06.808 "subsystems": [ 00:26:06.808 { 00:26:06.808 "subsystem": "bdev", 00:26:06.808 "config": [ 00:26:06.808 { 00:26:06.808 "params": { 00:26:06.808 "trtype": "pcie", 00:26:06.808 "traddr": "0000:00:06.0", 00:26:06.808 "name": "Nvme0" 00:26:06.808 }, 00:26:06.808 "method": "bdev_nvme_attach_controller" 00:26:06.808 }, 00:26:06.808 { 00:26:06.808 "method": "bdev_wait_for_examine" 00:26:06.808 } 00:26:06.808 ] 00:26:06.808 } 00:26:06.808 ] 00:26:06.808 } 00:26:06.808 [2024-07-13 16:43:38.116006] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:06.808 [2024-07-13 16:43:38.116505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143921 ] 00:26:06.808 [2024-07-13 16:43:38.272993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.067 [2024-07-13 16:43:38.346137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.635  Copying: 56/56 [kB] (average 27 MBps) 00:26:07.635 00:26:07.635 16:43:38 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:07.635 16:43:38 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:07.635 16:43:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:07.635 16:43:38 -- dd/common.sh@11 -- # local nvme_ref= 00:26:07.635 16:43:38 -- dd/common.sh@12 -- # local size=57344 00:26:07.635 16:43:38 -- dd/common.sh@14 -- # local bs=1048576 00:26:07.635 16:43:38 -- dd/common.sh@15 -- # local count=1 00:26:07.635 16:43:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:07.635 16:43:38 -- dd/common.sh@18 -- # gen_conf 00:26:07.635 16:43:38 -- dd/common.sh@31 -- # xtrace_disable 00:26:07.635 16:43:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.635 { 00:26:07.635 "subsystems": [ 00:26:07.635 { 00:26:07.635 "subsystem": "bdev", 00:26:07.635 "config": [ 00:26:07.635 { 00:26:07.635 "params": { 00:26:07.635 "trtype": "pcie", 00:26:07.635 "traddr": "0000:00:06.0", 00:26:07.635 "name": "Nvme0" 00:26:07.635 }, 00:26:07.635 "method": "bdev_nvme_attach_controller" 00:26:07.635 }, 00:26:07.635 { 00:26:07.635 "method": "bdev_wait_for_examine" 00:26:07.635 } 00:26:07.635 ] 00:26:07.635 } 00:26:07.635 ] 00:26:07.635 } 00:26:07.635 [2024-07-13 16:43:39.041709] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:07.636 [2024-07-13 16:43:39.042251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143937 ] 00:26:07.894 [2024-07-13 16:43:39.196367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.894 [2024-07-13 16:43:39.273014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.412  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:08.412 00:26:08.671 16:43:39 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:08.671 16:43:39 -- dd/basic_rw.sh@23 -- # count=7 00:26:08.671 16:43:39 -- dd/basic_rw.sh@24 -- # count=7 00:26:08.671 16:43:39 -- dd/basic_rw.sh@25 -- # size=57344 00:26:08.671 16:43:39 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:08.671 16:43:39 -- dd/common.sh@98 -- # xtrace_disable 00:26:08.671 16:43:39 -- common/autotest_common.sh@10 -- # set +x 00:26:09.239 16:43:40 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:26:09.239 16:43:40 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:09.239 16:43:40 -- dd/common.sh@31 -- # xtrace_disable 00:26:09.239 16:43:40 -- common/autotest_common.sh@10 -- # set +x 00:26:09.239 [2024-07-13 16:43:40.491018] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:09.239 [2024-07-13 16:43:40.491443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143969 ] 00:26:09.239 { 00:26:09.239 "subsystems": [ 00:26:09.239 { 00:26:09.239 "subsystem": "bdev", 00:26:09.239 "config": [ 00:26:09.239 { 00:26:09.239 "params": { 00:26:09.239 "trtype": "pcie", 00:26:09.239 "traddr": "0000:00:06.0", 00:26:09.239 "name": "Nvme0" 00:26:09.239 }, 00:26:09.239 "method": "bdev_nvme_attach_controller" 00:26:09.239 }, 00:26:09.239 { 00:26:09.239 "method": "bdev_wait_for_examine" 00:26:09.239 } 00:26:09.239 ] 00:26:09.239 } 00:26:09.239 ] 00:26:09.239 } 00:26:09.239 [2024-07-13 16:43:40.632221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.239 [2024-07-13 16:43:40.703911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.067  Copying: 56/56 [kB] (average 54 MBps) 00:26:10.067 00:26:10.067 16:43:41 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:10.067 16:43:41 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:26:10.067 16:43:41 -- dd/common.sh@31 -- # xtrace_disable 00:26:10.067 16:43:41 -- common/autotest_common.sh@10 -- # set +x 00:26:10.067 { 00:26:10.067 "subsystems": [ 00:26:10.067 { 00:26:10.067 "subsystem": "bdev", 00:26:10.067 "config": [ 00:26:10.067 { 00:26:10.067 "params": { 00:26:10.067 "trtype": "pcie", 00:26:10.067 "traddr": "0000:00:06.0", 00:26:10.067 "name": "Nvme0" 00:26:10.067 }, 00:26:10.067 "method": "bdev_nvme_attach_controller" 00:26:10.067 }, 00:26:10.067 { 00:26:10.067 "method": "bdev_wait_for_examine" 00:26:10.067 } 00:26:10.067 ] 00:26:10.067 } 00:26:10.067 ] 00:26:10.067 } 00:26:10.067 [2024-07-13 16:43:41.406384] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:10.067 [2024-07-13 16:43:41.406936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143984 ] 00:26:10.327 [2024-07-13 16:43:41.563199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.327 [2024-07-13 16:43:41.639406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.871  Copying: 56/56 [kB] (average 54 MBps) 00:26:10.871 00:26:10.871 16:43:42 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:10.871 16:43:42 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:10.871 16:43:42 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:10.871 16:43:42 -- dd/common.sh@11 -- # local nvme_ref= 00:26:10.871 16:43:42 -- dd/common.sh@12 -- # local size=57344 00:26:10.871 16:43:42 -- dd/common.sh@14 -- # local bs=1048576 00:26:10.871 16:43:42 -- dd/common.sh@15 -- # local count=1 00:26:10.871 16:43:42 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:10.871 16:43:42 -- dd/common.sh@18 -- # gen_conf 00:26:10.871 16:43:42 -- dd/common.sh@31 -- # xtrace_disable 00:26:10.871 16:43:42 -- common/autotest_common.sh@10 -- # set +x 00:26:10.871 { 00:26:10.871 "subsystems": [ 00:26:10.871 { 00:26:10.871 "subsystem": "bdev", 00:26:10.871 "config": [ 00:26:10.871 { 00:26:10.871 "params": { 00:26:10.871 "trtype": "pcie", 00:26:10.871 "traddr": "0000:00:06.0", 00:26:10.871 "name": "Nvme0" 00:26:10.871 }, 00:26:10.871 "method": "bdev_nvme_attach_controller" 00:26:10.871 }, 00:26:10.871 { 00:26:10.871 "method": "bdev_wait_for_examine" 00:26:10.871 } 00:26:10.871 ] 00:26:10.871 } 00:26:10.871 ] 00:26:10.871 } 00:26:10.871 [2024-07-13 16:43:42.321658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:10.871 [2024-07-13 16:43:42.322105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144005 ] 00:26:11.142 [2024-07-13 16:43:42.477610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.142 [2024-07-13 16:43:42.547182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.966  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:11.966 00:26:11.966 16:43:43 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:11.966 16:43:43 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:11.966 16:43:43 -- dd/basic_rw.sh@23 -- # count=3 00:26:11.966 16:43:43 -- dd/basic_rw.sh@24 -- # count=3 00:26:11.966 16:43:43 -- dd/basic_rw.sh@25 -- # size=49152 00:26:11.966 16:43:43 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:11.966 16:43:43 -- dd/common.sh@98 -- # xtrace_disable 00:26:11.966 16:43:43 -- common/autotest_common.sh@10 -- # set +x 00:26:12.223 16:43:43 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:26:12.223 16:43:43 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:12.223 16:43:43 -- dd/common.sh@31 -- # xtrace_disable 00:26:12.223 16:43:43 -- common/autotest_common.sh@10 -- # set +x 00:26:12.223 { 00:26:12.223 "subsystems": [ 00:26:12.223 { 00:26:12.224 "subsystem": "bdev", 00:26:12.224 "config": [ 00:26:12.224 { 00:26:12.224 "params": { 00:26:12.224 "trtype": "pcie", 00:26:12.224 "traddr": "0000:00:06.0", 00:26:12.224 "name": "Nvme0" 00:26:12.224 }, 00:26:12.224 "method": "bdev_nvme_attach_controller" 00:26:12.224 }, 00:26:12.224 { 00:26:12.224 "method": "bdev_wait_for_examine" 00:26:12.224 } 00:26:12.224 ] 00:26:12.224 } 00:26:12.224 ] 00:26:12.224 } 00:26:12.224 [2024-07-13 16:43:43.664754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:12.224 [2024-07-13 16:43:43.665175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144025 ] 00:26:12.481 [2024-07-13 16:43:43.820075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.482 [2024-07-13 16:43:43.890737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.305  Copying: 48/48 [kB] (average 46 MBps) 00:26:13.305 00:26:13.305 16:43:44 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:26:13.305 16:43:44 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:13.305 16:43:44 -- dd/common.sh@31 -- # xtrace_disable 00:26:13.305 16:43:44 -- common/autotest_common.sh@10 -- # set +x 00:26:13.305 { 00:26:13.305 "subsystems": [ 00:26:13.305 { 00:26:13.305 "subsystem": "bdev", 00:26:13.305 "config": [ 00:26:13.305 { 00:26:13.305 "params": { 00:26:13.305 "trtype": "pcie", 00:26:13.305 "traddr": "0000:00:06.0", 00:26:13.305 "name": "Nvme0" 00:26:13.305 }, 00:26:13.305 "method": "bdev_nvme_attach_controller" 00:26:13.305 }, 00:26:13.305 { 00:26:13.305 "method": "bdev_wait_for_examine" 00:26:13.305 } 00:26:13.305 ] 00:26:13.305 } 00:26:13.305 ] 00:26:13.305 } 00:26:13.305 [2024-07-13 16:43:44.599959] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:13.305 [2024-07-13 16:43:44.600420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144047 ] 00:26:13.305 [2024-07-13 16:43:44.755488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.563 [2024-07-13 16:43:44.822925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.130  Copying: 48/48 [kB] (average 46 MBps) 00:26:14.130 00:26:14.130 16:43:45 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:14.130 16:43:45 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:26:14.130 16:43:45 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:14.130 16:43:45 -- dd/common.sh@11 -- # local nvme_ref= 00:26:14.130 16:43:45 -- dd/common.sh@12 -- # local size=49152 00:26:14.130 16:43:45 -- dd/common.sh@14 -- # local bs=1048576 00:26:14.130 16:43:45 -- dd/common.sh@15 -- # local count=1 00:26:14.130 16:43:45 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:14.130 16:43:45 -- dd/common.sh@18 -- # gen_conf 00:26:14.130 16:43:45 -- dd/common.sh@31 -- # xtrace_disable 00:26:14.130 16:43:45 -- common/autotest_common.sh@10 -- # set +x 00:26:14.130 { 00:26:14.130 "subsystems": [ 00:26:14.130 { 00:26:14.130 "subsystem": "bdev", 00:26:14.130 "config": [ 00:26:14.130 { 00:26:14.130 "params": { 00:26:14.130 "trtype": "pcie", 00:26:14.130 "traddr": "0000:00:06.0", 00:26:14.130 "name": "Nvme0" 00:26:14.130 }, 00:26:14.130 "method": "bdev_nvme_attach_controller" 00:26:14.130 }, 00:26:14.130 { 00:26:14.130 "method": "bdev_wait_for_examine" 00:26:14.130 } 00:26:14.130 ] 00:26:14.130 } 00:26:14.130 ] 00:26:14.130 } 00:26:14.130 [2024-07-13 16:43:45.517820] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:26:14.130 [2024-07-13 16:43:45.518243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144068 ] 00:26:14.389 [2024-07-13 16:43:45.672315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.389 [2024-07-13 16:43:45.738578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.906  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:14.906 00:26:14.906 16:43:46 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:14.906 16:43:46 -- dd/basic_rw.sh@23 -- # count=3 00:26:14.906 16:43:46 -- dd/basic_rw.sh@24 -- # count=3 00:26:14.906 16:43:46 -- dd/basic_rw.sh@25 -- # size=49152 00:26:14.906 16:43:46 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:14.906 16:43:46 -- dd/common.sh@98 -- # xtrace_disable 00:26:14.906 16:43:46 -- common/autotest_common.sh@10 -- # set +x 00:26:15.472 16:43:46 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:26:15.472 16:43:46 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:15.472 16:43:46 -- dd/common.sh@31 -- # xtrace_disable 00:26:15.473 16:43:46 -- common/autotest_common.sh@10 -- # set +x 00:26:15.473 { 00:26:15.473 "subsystems": [ 00:26:15.473 { 00:26:15.473 "subsystem": "bdev", 00:26:15.473 "config": [ 00:26:15.473 { 00:26:15.473 "params": { 00:26:15.473 "trtype": "pcie", 00:26:15.473 "traddr": "0000:00:06.0", 00:26:15.473 "name": "Nvme0" 00:26:15.473 }, 00:26:15.473 "method": "bdev_nvme_attach_controller" 00:26:15.473 }, 00:26:15.473 { 00:26:15.473 "method": "bdev_wait_for_examine" 00:26:15.473 } 00:26:15.473 ] 00:26:15.473 } 00:26:15.473 ] 00:26:15.473 } 00:26:15.473 [2024-07-13 16:43:46.857865] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:15.473 [2024-07-13 16:43:46.858306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144088 ] 00:26:15.731 [2024-07-13 16:43:47.012677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.731 [2024-07-13 16:43:47.081361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.249  Copying: 48/48 [kB] (average 46 MBps) 00:26:16.249 00:26:16.249 16:43:47 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:16.249 16:43:47 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:26:16.249 16:43:47 -- dd/common.sh@31 -- # xtrace_disable 00:26:16.249 16:43:47 -- common/autotest_common.sh@10 -- # set +x 00:26:16.509 { 00:26:16.509 "subsystems": [ 00:26:16.509 { 00:26:16.509 "subsystem": "bdev", 00:26:16.509 "config": [ 00:26:16.509 { 00:26:16.509 "params": { 00:26:16.509 "trtype": "pcie", 00:26:16.509 "traddr": "0000:00:06.0", 00:26:16.509 "name": "Nvme0" 00:26:16.509 }, 00:26:16.509 "method": "bdev_nvme_attach_controller" 00:26:16.509 }, 00:26:16.509 { 00:26:16.509 "method": "bdev_wait_for_examine" 00:26:16.509 } 00:26:16.509 ] 00:26:16.509 } 00:26:16.509 ] 00:26:16.509 } 00:26:16.509 [2024-07-13 16:43:47.759658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:16.509 [2024-07-13 16:43:47.760653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144107 ] 00:26:16.509 [2024-07-13 16:43:47.916333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.768 [2024-07-13 16:43:47.983489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.336  Copying: 48/48 [kB] (average 46 MBps) 00:26:17.336 00:26:17.336 16:43:48 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:17.336 16:43:48 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:26:17.336 16:43:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:17.336 16:43:48 -- dd/common.sh@11 -- # local nvme_ref= 00:26:17.336 16:43:48 -- dd/common.sh@12 -- # local size=49152 00:26:17.337 16:43:48 -- dd/common.sh@14 -- # local bs=1048576 00:26:17.337 16:43:48 -- dd/common.sh@15 -- # local count=1 00:26:17.337 16:43:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:17.337 16:43:48 -- dd/common.sh@18 -- # gen_conf 00:26:17.337 16:43:48 -- dd/common.sh@31 -- # xtrace_disable 00:26:17.337 16:43:48 -- common/autotest_common.sh@10 -- # set +x 00:26:17.337 [2024-07-13 16:43:48.681920] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:17.337 [2024-07-13 16:43:48.682531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144124 ] 00:26:17.337 { 00:26:17.337 "subsystems": [ 00:26:17.337 { 00:26:17.337 "subsystem": "bdev", 00:26:17.337 "config": [ 00:26:17.337 { 00:26:17.337 "params": { 00:26:17.337 "trtype": "pcie", 00:26:17.337 "traddr": "0000:00:06.0", 00:26:17.337 "name": "Nvme0" 00:26:17.337 }, 00:26:17.337 "method": "bdev_nvme_attach_controller" 00:26:17.337 }, 00:26:17.337 { 00:26:17.337 "method": "bdev_wait_for_examine" 00:26:17.337 } 00:26:17.337 ] 00:26:17.337 } 00:26:17.337 ] 00:26:17.337 } 00:26:17.595 [2024-07-13 16:43:48.827607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.595 [2024-07-13 16:43:48.896347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.112  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:18.112 00:26:18.112 ************************************ 00:26:18.112 END TEST dd_rw 00:26:18.112 ************************************ 00:26:18.112 00:26:18.112 real 0m19.718s 00:26:18.112 user 0m12.693s 00:26:18.112 sys 0m5.569s 00:26:18.112 16:43:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:18.112 16:43:49 -- common/autotest_common.sh@10 -- # set +x 00:26:18.371 16:43:49 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:26:18.371 16:43:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:18.371 16:43:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:18.371 16:43:49 -- common/autotest_common.sh@10 -- # set +x 00:26:18.371 ************************************ 00:26:18.371 START TEST dd_rw_offset 00:26:18.371 ************************************ 00:26:18.371 16:43:49 -- common/autotest_common.sh@1104 -- # basic_offset 00:26:18.371 16:43:49 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:26:18.371 16:43:49 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:26:18.371 16:43:49 -- dd/common.sh@98 -- # xtrace_disable 00:26:18.371 16:43:49 -- common/autotest_common.sh@10 -- # set +x 00:26:18.371 16:43:49 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:26:18.372 16:43:49 -- dd/basic_rw.sh@56 -- # 
data=r7s93ek747o589nzq4f4vqslzcxzxj5gvend0d5zmqu791tplkz8ak53166i4drz3vx5fxvgsar4kr6547ertsch1cuzmpfkuh9743b4iiwyqtf610nf35pjvbp4l5mvn1tjglmg5bgl28r0odptmqvm8p41zmd0kg7qqit2mi9714o1p5gtcvsohv7or72jj7nmvovkgoymxrmrwon9de1bf2uug9t3oolf7457gxj3g1w8shbt575n29n9vz0w8oaodc3v488qi9h3b45yu2eo2meql69r38pvhdv08m1cgnemzefdhzdtwobuk6fzctwq7ajdqeqen3eul7nxx55t6rg8fv7q9f42ea1zlxscmy00gaqssxf3f97ti0vnoadl93xrlg2tq61ki01jau65bk8augptpfhwmpv2zlwn059myouz9qvhsahhpl0vavhdjm0c5ekecpasq5jcf8qvqozq1f0cwgtc6y73pr3s0dhqqghttstk1ztfu6ekvfmt4q7ucp8mm8xod0emznkvjrg7gzft5wjw14a6cgh6kqm89bg1gxrfor669lxreyswccg18qv4shhikl5vstg6059b6op9uym418vvebn29i8d5mdpwydopnt1vec5pjogqdlofi16a4af9406gog6k0p36rw2fv7v0r9xkgglfs7h4fe230hc7ca1072h6dhuyjfxm822h44yrtys7b8wwxepf22w3z83ap3cgxn267c9m0exifgvaf0j7oid1lv1qm2piw7qwx2u373fnn7f0vu7rrcmooai3yudrsedy3fw65ir5fgte293k1r50kmb1q0e68hxv553wbyvkfe8pv0xtjh2ixew2wago62qvrvdspvmj9hodqpsd7yzdm633x5vcr6zboppkgfqgurj0vtcvgvtoyeb3yu800ym3h3k4o9bba4z60aa3djad5rx3pbdnxlzcc5m7s009k7e2ifwrfjmxlhhoj82eg7p3n4zjut26rnhmgaixz739n4tyt3f2cbv6dydb5w3v6ujz3synswlg4b6ndk5hjjc9d026jni67cum7vm84zjelh0cmrd1msqzqqte2cg40f6blsucnbk1locyvtc18mxkbkjr8pkh8mlfnovfgqnpesnhlzjvfp5rn1np8l3l8hrx4n8snqpc4pi1l2ouxtkzdtpo706eox6agd86gj91uf87kiszz1x8frqwyzb7d1n3kslboj4ogmqpompa7c2mpij7yonjvn8y6tjb62gcrkfy3807sqsva3ylkv9gg29c6e6mfr0dqqgb0c06gdrphkhm0l0fm40ampwxoc1n22tomt49g2xgohgmj6bn3f36v47sz9lg8gsefgcy51lhrkmx3wayy8jzjb8vjntvqkoj5z83cpzyk4vs1py3tq8f3dbhyhinlbkbadu1nt4x4pdvhyy6gzih2ajvygtjbfeobki9stroc8m0y9l0fh9on5ziw8wvv1mbm735amqdg59y55aa998c93xqqu92hm6q4skl3wgzj2ue0mfq9wxgpc94ttiyy8k1j59feftpom4cn3h1ax3twolbqrhqqoocoqbgpy5s6wkd9mc1dkdgexcms1c2gzwg9wnq2855q076idmr2btz66iexjdppg5j681v8dwc4nm0pc06d98tj8rmes1a6avtcb6vjkitg6bq9ae8uc8ltycvde57eljed2xqn33h5hc3pn29zbpdumx6n0gi7ywstdk1i0k3ou1p769tzkgijud3w6sjwg05xmuzbaj9zp7bfllp4m51bt0hwsb0wuf9lt2ucwsdgurq7ejujltfpilp77frrv787fecmnbb0o60rwg0ge0ex66zwkrih02jcl9t39i04750tzsmh9v8jq1xgqj3jqrhunetnusjbw7uobzftmzi3sgzidyxlp54n2p8arstqea2tk4llgmj6qay506u8s8xx6n4dfmshlee4riv345bbaszwotz1nb1rnvjmybori4ru3kfwumcha4j39fmeo83xgf32q92npofkmr6q186xgky7vjre3f8adlcjtrq7thhe13azr74zy6p6zibdosnvlm8b14ucy4hd5m6kr0s9o5436j0nfe1yiofu1hfjc2zp3a6knl93a1a2lcvaszpgto8i5ubobae8qmv34bkjxggl45slizletu5b3gyyh3b4wjqtkhkjy7zs9dxqun97j8bdsyuhfo5mjetutuhjiqydgl8cduwz2cu8o8plmq8dl180wo8nayinc8pm5bd5c62nf563amraj0pnr4x3n1yrvq9puqbhqzgka8nus95ubs9z5si1alqv9sn0icbz2br4cl8jeax1t4arntylp2yg08guepv450o3c9b34nipcxvyemfkgql1sza1p8z6ibzp1qrtd9knhl2euyrntd1ndkeuvg9fjeljuj55adwzjs3vr6a1x7hhnvmggxhmr2ajs2680ri4qsv5meybx1vduf162p9bxug40u2vk2mw5xays5axikeoq2xagdgl85lynrhgwdyeus8hgb7dju60dnn9l10ht2hr580df3v1oti8zsj38oexsl8ryjrozil12tzweapd46ng0h4gbyil8ywyw33jciorhrxssyfumtdcwxaj84rxavpgo91xybjcmqo8g3ktirfcrawtdvde7hcvanx2czshbg3grg8obrbw5cqbi38nezgd5rmygbzsw7asvuog0p3rkjfxjfrfvg5n9a0s0sr2xvu7hwcw0izphj7xjky9dmmed16s1exvpu5jmqq080rzvyml06j2pn7bc76apij0jbeplv10wzii4rtfnc33hrw0c1p7fmoponmps36a11clk7d9wm1kua389ail0n3ciuitypp14x3iji5v0omfiwvvkfdq39yltuw6q4n8cfygci7h3jyc559q24t3wxn8nzmx97dqm8epq8bvh4afh3dwf4va8hwhbh8xqun3ma4g3w41gb5u2ap6ehimzf56u1oas0hsvwzw4bi36xqgmbl79oqnmtyced85n8yt7u8cnl5dhqcsj9wncxpgw4qpm4fxmbdm2wrih5zvq3mqv1emoi99uuffd9y4v99v0gcxypoox8zwpj04ycf590f790l2vklnm63clbsveziwnj7jlvvmct2em1ybl4pea5oar11lnsxd3ug8ifm6rhuxm43gp6e8e88b4jpvyk6spd9cgnadj8wdt9f9in2bqpe29mahwnn8lvejh78ikw3urizdi8ano43c08xddvevpetkv1k4rtu9jmtsjb5o1axy3llf6uz3y1j858ke55rlgqdhaustglppa2tjrqy4whlt934g04f9at6u9w0gep7tjhsi1ymgtat6mnd9no5jygym5c2v94ru2yjni04q5kroqog4900627axi4hcfcetnxr5tlogik2zy3hqkenhctnwkvf66742ewu6muw1k5q0u16ceeogvq39n2nd8l0ql38s0phl
8wonb4ps0xb26hep59xjt5r9ea5kzgrv5k4a28m47qf7irrnpzfejl1ul5qrnw4n2wtu7wic5soa5gqshbkhx1m2s2srgodcucojv4z38h431vobnl64yt7xhhuhis4noz8lwnozgq8xk3wcky0g12m23bcasp6wjf5y2b4ltwyjwxheqqleisl1uzbkjk55fzim90f2kjbf6pmjl3ltf7jyta8mskwbb6296hzerk17cr3fqzyckp6uf9087oqd5kemvrjqucexivud8zscpk8e216mslxd4n31aa0dx6bsucpd1g7k3lrfk1mbylxs0n491f83o8w4q1e2g64qytp66y3m15q5tno06buiar9k08lhr8roj6wfhts3qasy0tp89uk7djdgjuasj5e7t06zqa76zv7yn5yvqjvqjl5r3qc25a7q6iprqdh88bj8fiq91eoq1u3jyntqa1zt1b93qtrben55uenamo8nzg3n86wx77xeni1c4gmoy48ev7jw08d76z8z7ldp9vt6347fd7wh32glef 00:26:18.372 16:43:49 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:26:18.372 16:43:49 -- dd/basic_rw.sh@59 -- # gen_conf 00:26:18.372 16:43:49 -- dd/common.sh@31 -- # xtrace_disable 00:26:18.372 16:43:49 -- common/autotest_common.sh@10 -- # set +x 00:26:18.372 { 00:26:18.372 "subsystems": [ 00:26:18.372 { 00:26:18.372 "subsystem": "bdev", 00:26:18.372 "config": [ 00:26:18.372 { 00:26:18.372 "params": { 00:26:18.372 "trtype": "pcie", 00:26:18.372 "traddr": "0000:00:06.0", 00:26:18.372 "name": "Nvme0" 00:26:18.372 }, 00:26:18.372 "method": "bdev_nvme_attach_controller" 00:26:18.372 }, 00:26:18.372 { 00:26:18.372 "method": "bdev_wait_for_examine" 00:26:18.372 } 00:26:18.372 ] 00:26:18.372 } 00:26:18.372 ] 00:26:18.372 } 00:26:18.372 [2024-07-13 16:43:49.753001] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:18.372 [2024-07-13 16:43:49.753284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144164 ] 00:26:18.631 [2024-07-13 16:43:49.909853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.631 [2024-07-13 16:43:50.012487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.458  Copying: 4096/4096 [B] (average 4000 kBps) 00:26:19.458 00:26:19.458 16:43:50 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:26:19.458 16:43:50 -- dd/basic_rw.sh@65 -- # gen_conf 00:26:19.458 16:43:50 -- dd/common.sh@31 -- # xtrace_disable 00:26:19.458 16:43:50 -- common/autotest_common.sh@10 -- # set +x 00:26:19.458 { 00:26:19.458 "subsystems": [ 00:26:19.458 { 00:26:19.458 "subsystem": "bdev", 00:26:19.458 "config": [ 00:26:19.458 { 00:26:19.458 "params": { 00:26:19.458 "trtype": "pcie", 00:26:19.458 "traddr": "0000:00:06.0", 00:26:19.458 "name": "Nvme0" 00:26:19.458 }, 00:26:19.458 "method": "bdev_nvme_attach_controller" 00:26:19.458 }, 00:26:19.458 { 00:26:19.458 "method": "bdev_wait_for_examine" 00:26:19.458 } 00:26:19.458 ] 00:26:19.458 } 00:26:19.458 ] 00:26:19.458 } 00:26:19.458 [2024-07-13 16:43:50.822068] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:19.458 [2024-07-13 16:43:50.823090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144188 ] 00:26:19.718 [2024-07-13 16:43:50.978842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.718 [2024-07-13 16:43:51.078330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.545  Copying: 4096/4096 [B] (average 4000 kBps) 00:26:20.545 00:26:20.545 16:43:51 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:26:20.545 ************************************ 00:26:20.545 END TEST dd_rw_offset 00:26:20.545 ************************************ 00:26:20.546 16:43:51 -- dd/basic_rw.sh@72 -- # [[ r7s93ek747o589nzq4f4vqslzcxzxj5gvend0d5zmqu791tplkz8ak53166i4drz3vx5fxvgsar4kr6547ertsch1cuzmpfkuh9743b4iiwyqtf610nf35pjvbp4l5mvn1tjglmg5bgl28r0odptmqvm8p41zmd0kg7qqit2mi9714o1p5gtcvsohv7or72jj7nmvovkgoymxrmrwon9de1bf2uug9t3oolf7457gxj3g1w8shbt575n29n9vz0w8oaodc3v488qi9h3b45yu2eo2meql69r38pvhdv08m1cgnemzefdhzdtwobuk6fzctwq7ajdqeqen3eul7nxx55t6rg8fv7q9f42ea1zlxscmy00gaqssxf3f97ti0vnoadl93xrlg2tq61ki01jau65bk8augptpfhwmpv2zlwn059myouz9qvhsahhpl0vavhdjm0c5ekecpasq5jcf8qvqozq1f0cwgtc6y73pr3s0dhqqghttstk1ztfu6ekvfmt4q7ucp8mm8xod0emznkvjrg7gzft5wjw14a6cgh6kqm89bg1gxrfor669lxreyswccg18qv4shhikl5vstg6059b6op9uym418vvebn29i8d5mdpwydopnt1vec5pjogqdlofi16a4af9406gog6k0p36rw2fv7v0r9xkgglfs7h4fe230hc7ca1072h6dhuyjfxm822h44yrtys7b8wwxepf22w3z83ap3cgxn267c9m0exifgvaf0j7oid1lv1qm2piw7qwx2u373fnn7f0vu7rrcmooai3yudrsedy3fw65ir5fgte293k1r50kmb1q0e68hxv553wbyvkfe8pv0xtjh2ixew2wago62qvrvdspvmj9hodqpsd7yzdm633x5vcr6zboppkgfqgurj0vtcvgvtoyeb3yu800ym3h3k4o9bba4z60aa3djad5rx3pbdnxlzcc5m7s009k7e2ifwrfjmxlhhoj82eg7p3n4zjut26rnhmgaixz739n4tyt3f2cbv6dydb5w3v6ujz3synswlg4b6ndk5hjjc9d026jni67cum7vm84zjelh0cmrd1msqzqqte2cg40f6blsucnbk1locyvtc18mxkbkjr8pkh8mlfnovfgqnpesnhlzjvfp5rn1np8l3l8hrx4n8snqpc4pi1l2ouxtkzdtpo706eox6agd86gj91uf87kiszz1x8frqwyzb7d1n3kslboj4ogmqpompa7c2mpij7yonjvn8y6tjb62gcrkfy3807sqsva3ylkv9gg29c6e6mfr0dqqgb0c06gdrphkhm0l0fm40ampwxoc1n22tomt49g2xgohgmj6bn3f36v47sz9lg8gsefgcy51lhrkmx3wayy8jzjb8vjntvqkoj5z83cpzyk4vs1py3tq8f3dbhyhinlbkbadu1nt4x4pdvhyy6gzih2ajvygtjbfeobki9stroc8m0y9l0fh9on5ziw8wvv1mbm735amqdg59y55aa998c93xqqu92hm6q4skl3wgzj2ue0mfq9wxgpc94ttiyy8k1j59feftpom4cn3h1ax3twolbqrhqqoocoqbgpy5s6wkd9mc1dkdgexcms1c2gzwg9wnq2855q076idmr2btz66iexjdppg5j681v8dwc4nm0pc06d98tj8rmes1a6avtcb6vjkitg6bq9ae8uc8ltycvde57eljed2xqn33h5hc3pn29zbpdumx6n0gi7ywstdk1i0k3ou1p769tzkgijud3w6sjwg05xmuzbaj9zp7bfllp4m51bt0hwsb0wuf9lt2ucwsdgurq7ejujltfpilp77frrv787fecmnbb0o60rwg0ge0ex66zwkrih02jcl9t39i04750tzsmh9v8jq1xgqj3jqrhunetnusjbw7uobzftmzi3sgzidyxlp54n2p8arstqea2tk4llgmj6qay506u8s8xx6n4dfmshlee4riv345bbaszwotz1nb1rnvjmybori4ru3kfwumcha4j39fmeo83xgf32q92npofkmr6q186xgky7vjre3f8adlcjtrq7thhe13azr74zy6p6zibdosnvlm8b14ucy4hd5m6kr0s9o5436j0nfe1yiofu1hfjc2zp3a6knl93a1a2lcvaszpgto8i5ubobae8qmv34bkjxggl45slizletu5b3gyyh3b4wjqtkhkjy7zs9dxqun97j8bdsyuhfo5mjetutuhjiqydgl8cduwz2cu8o8plmq8dl180wo8nayinc8pm5bd5c62nf563amraj0pnr4x3n1yrvq9puqbhqzgka8nus95ubs9z5si1alqv9sn0icbz2br4cl8jeax1t4arntylp2yg08guepv450o3c9b34nipcxvyemfkgql1sza1p8z6ibzp1qrtd9knhl2euyrntd1ndkeuvg9fjeljuj55adwzjs3vr6a1x7hhnvmggxhmr2ajs2680ri4qsv5meybx1vduf162p9bxug40u2vk2mw5xays5axikeoq2xagdgl85lynrhgwdyeus8hgb7dju60dnn9l10ht2hr580df3v1oti8zsj38oexsl8ryjrozil12tzweapd46ng0h4gbyil8ywyw33jciorhrxssyfumtdcwxaj84rxavpgo91xybjcmqo8g3ktirfcrawtdvde7hcvanx2czs
hbg3grg8obrbw5cqbi38nezgd5rmygbzsw7asvuog0p3rkjfxjfrfvg5n9a0s0sr2xvu7hwcw0izphj7xjky9dmmed16s1exvpu5jmqq080rzvyml06j2pn7bc76apij0jbeplv10wzii4rtfnc33hrw0c1p7fmoponmps36a11clk7d9wm1kua389ail0n3ciuitypp14x3iji5v0omfiwvvkfdq39yltuw6q4n8cfygci7h3jyc559q24t3wxn8nzmx97dqm8epq8bvh4afh3dwf4va8hwhbh8xqun3ma4g3w41gb5u2ap6ehimzf56u1oas0hsvwzw4bi36xqgmbl79oqnmtyced85n8yt7u8cnl5dhqcsj9wncxpgw4qpm4fxmbdm2wrih5zvq3mqv1emoi99uuffd9y4v99v0gcxypoox8zwpj04ycf590f790l2vklnm63clbsveziwnj7jlvvmct2em1ybl4pea5oar11lnsxd3ug8ifm6rhuxm43gp6e8e88b4jpvyk6spd9cgnadj8wdt9f9in2bqpe29mahwnn8lvejh78ikw3urizdi8ano43c08xddvevpetkv1k4rtu9jmtsjb5o1axy3llf6uz3y1j858ke55rlgqdhaustglppa2tjrqy4whlt934g04f9at6u9w0gep7tjhsi1ymgtat6mnd9no5jygym5c2v94ru2yjni04q5kroqog4900627axi4hcfcetnxr5tlogik2zy3hqkenhctnwkvf66742ewu6muw1k5q0u16ceeogvq39n2nd8l0ql38s0phl8wonb4ps0xb26hep59xjt5r9ea5kzgrv5k4a28m47qf7irrnpzfejl1ul5qrnw4n2wtu7wic5soa5gqshbkhx1m2s2srgodcucojv4z38h431vobnl64yt7xhhuhis4noz8lwnozgq8xk3wcky0g12m23bcasp6wjf5y2b4ltwyjwxheqqleisl1uzbkjk55fzim90f2kjbf6pmjl3ltf7jyta8mskwbb6296hzerk17cr3fqzyckp6uf9087oqd5kemvrjqucexivud8zscpk8e216mslxd4n31aa0dx6bsucpd1g7k3lrfk1mbylxs0n491f83o8w4q1e2g64qytp66y3m15q5tno06buiar9k08lhr8roj6wfhts3qasy0tp89uk7djdgjuasj5e7t06zqa76zv7yn5yvqjvqjl5r3qc25a7q6iprqdh88bj8fiq91eoq1u3jyntqa1zt1b93qtrben55uenamo8nzg3n86wx77xeni1c4gmoy48ev7jw08d76z8z7ldp9vt6347fd7wh32glef == \r\7\s\9\3\e\k\7\4\7\o\5\8\9\n\z\q\4\f\4\v\q\s\l\z\c\x\z\x\j\5\g\v\e\n\d\0\d\5\z\m\q\u\7\9\1\t\p\l\k\z\8\a\k\5\3\1\6\6\i\4\d\r\z\3\v\x\5\f\x\v\g\s\a\r\4\k\r\6\5\4\7\e\r\t\s\c\h\1\c\u\z\m\p\f\k\u\h\9\7\4\3\b\4\i\i\w\y\q\t\f\6\1\0\n\f\3\5\p\j\v\b\p\4\l\5\m\v\n\1\t\j\g\l\m\g\5\b\g\l\2\8\r\0\o\d\p\t\m\q\v\m\8\p\4\1\z\m\d\0\k\g\7\q\q\i\t\2\m\i\9\7\1\4\o\1\p\5\g\t\c\v\s\o\h\v\7\o\r\7\2\j\j\7\n\m\v\o\v\k\g\o\y\m\x\r\m\r\w\o\n\9\d\e\1\b\f\2\u\u\g\9\t\3\o\o\l\f\7\4\5\7\g\x\j\3\g\1\w\8\s\h\b\t\5\7\5\n\2\9\n\9\v\z\0\w\8\o\a\o\d\c\3\v\4\8\8\q\i\9\h\3\b\4\5\y\u\2\e\o\2\m\e\q\l\6\9\r\3\8\p\v\h\d\v\0\8\m\1\c\g\n\e\m\z\e\f\d\h\z\d\t\w\o\b\u\k\6\f\z\c\t\w\q\7\a\j\d\q\e\q\e\n\3\e\u\l\7\n\x\x\5\5\t\6\r\g\8\f\v\7\q\9\f\4\2\e\a\1\z\l\x\s\c\m\y\0\0\g\a\q\s\s\x\f\3\f\9\7\t\i\0\v\n\o\a\d\l\9\3\x\r\l\g\2\t\q\6\1\k\i\0\1\j\a\u\6\5\b\k\8\a\u\g\p\t\p\f\h\w\m\p\v\2\z\l\w\n\0\5\9\m\y\o\u\z\9\q\v\h\s\a\h\h\p\l\0\v\a\v\h\d\j\m\0\c\5\e\k\e\c\p\a\s\q\5\j\c\f\8\q\v\q\o\z\q\1\f\0\c\w\g\t\c\6\y\7\3\p\r\3\s\0\d\h\q\q\g\h\t\t\s\t\k\1\z\t\f\u\6\e\k\v\f\m\t\4\q\7\u\c\p\8\m\m\8\x\o\d\0\e\m\z\n\k\v\j\r\g\7\g\z\f\t\5\w\j\w\1\4\a\6\c\g\h\6\k\q\m\8\9\b\g\1\g\x\r\f\o\r\6\6\9\l\x\r\e\y\s\w\c\c\g\1\8\q\v\4\s\h\h\i\k\l\5\v\s\t\g\6\0\5\9\b\6\o\p\9\u\y\m\4\1\8\v\v\e\b\n\2\9\i\8\d\5\m\d\p\w\y\d\o\p\n\t\1\v\e\c\5\p\j\o\g\q\d\l\o\f\i\1\6\a\4\a\f\9\4\0\6\g\o\g\6\k\0\p\3\6\r\w\2\f\v\7\v\0\r\9\x\k\g\g\l\f\s\7\h\4\f\e\2\3\0\h\c\7\c\a\1\0\7\2\h\6\d\h\u\y\j\f\x\m\8\2\2\h\4\4\y\r\t\y\s\7\b\8\w\w\x\e\p\f\2\2\w\3\z\8\3\a\p\3\c\g\x\n\2\6\7\c\9\m\0\e\x\i\f\g\v\a\f\0\j\7\o\i\d\1\l\v\1\q\m\2\p\i\w\7\q\w\x\2\u\3\7\3\f\n\n\7\f\0\v\u\7\r\r\c\m\o\o\a\i\3\y\u\d\r\s\e\d\y\3\f\w\6\5\i\r\5\f\g\t\e\2\9\3\k\1\r\5\0\k\m\b\1\q\0\e\6\8\h\x\v\5\5\3\w\b\y\v\k\f\e\8\p\v\0\x\t\j\h\2\i\x\e\w\2\w\a\g\o\6\2\q\v\r\v\d\s\p\v\m\j\9\h\o\d\q\p\s\d\7\y\z\d\m\6\3\3\x\5\v\c\r\6\z\b\o\p\p\k\g\f\q\g\u\r\j\0\v\t\c\v\g\v\t\o\y\e\b\3\y\u\8\0\0\y\m\3\h\3\k\4\o\9\b\b\a\4\z\6\0\a\a\3\d\j\a\d\5\r\x\3\p\b\d\n\x\l\z\c\c\5\m\7\s\0\0\9\k\7\e\2\i\f\w\r\f\j\m\x\l\h\h\o\j\8\2\e\g\7\p\3\n\4\z\j\u\t\2\6\r\n\h\m\g\a\i\x\z\7\3\9\n\4\t\y\t\3\f\2\c\b\v\6\d\y\d\b\5\w\3\v\6\u\j\z\3\s\y\n\s\w\l\g\4\b\6\n\d\k\5\h\j\j\c\9\d\0\2\6\j\n\i\6\7\c\u\m\7\v\m\8\4\z\j\e\l\h\0
\c\m\r\d\1\m\s\q\z\q\q\t\e\2\c\g\4\0\f\6\b\l\s\u\c\n\b\k\1\l\o\c\y\v\t\c\1\8\m\x\k\b\k\j\r\8\p\k\h\8\m\l\f\n\o\v\f\g\q\n\p\e\s\n\h\l\z\j\v\f\p\5\r\n\1\n\p\8\l\3\l\8\h\r\x\4\n\8\s\n\q\p\c\4\p\i\1\l\2\o\u\x\t\k\z\d\t\p\o\7\0\6\e\o\x\6\a\g\d\8\6\g\j\9\1\u\f\8\7\k\i\s\z\z\1\x\8\f\r\q\w\y\z\b\7\d\1\n\3\k\s\l\b\o\j\4\o\g\m\q\p\o\m\p\a\7\c\2\m\p\i\j\7\y\o\n\j\v\n\8\y\6\t\j\b\6\2\g\c\r\k\f\y\3\8\0\7\s\q\s\v\a\3\y\l\k\v\9\g\g\2\9\c\6\e\6\m\f\r\0\d\q\q\g\b\0\c\0\6\g\d\r\p\h\k\h\m\0\l\0\f\m\4\0\a\m\p\w\x\o\c\1\n\2\2\t\o\m\t\4\9\g\2\x\g\o\h\g\m\j\6\b\n\3\f\3\6\v\4\7\s\z\9\l\g\8\g\s\e\f\g\c\y\5\1\l\h\r\k\m\x\3\w\a\y\y\8\j\z\j\b\8\v\j\n\t\v\q\k\o\j\5\z\8\3\c\p\z\y\k\4\v\s\1\p\y\3\t\q\8\f\3\d\b\h\y\h\i\n\l\b\k\b\a\d\u\1\n\t\4\x\4\p\d\v\h\y\y\6\g\z\i\h\2\a\j\v\y\g\t\j\b\f\e\o\b\k\i\9\s\t\r\o\c\8\m\0\y\9\l\0\f\h\9\o\n\5\z\i\w\8\w\v\v\1\m\b\m\7\3\5\a\m\q\d\g\5\9\y\5\5\a\a\9\9\8\c\9\3\x\q\q\u\9\2\h\m\6\q\4\s\k\l\3\w\g\z\j\2\u\e\0\m\f\q\9\w\x\g\p\c\9\4\t\t\i\y\y\8\k\1\j\5\9\f\e\f\t\p\o\m\4\c\n\3\h\1\a\x\3\t\w\o\l\b\q\r\h\q\q\o\o\c\o\q\b\g\p\y\5\s\6\w\k\d\9\m\c\1\d\k\d\g\e\x\c\m\s\1\c\2\g\z\w\g\9\w\n\q\2\8\5\5\q\0\7\6\i\d\m\r\2\b\t\z\6\6\i\e\x\j\d\p\p\g\5\j\6\8\1\v\8\d\w\c\4\n\m\0\p\c\0\6\d\9\8\t\j\8\r\m\e\s\1\a\6\a\v\t\c\b\6\v\j\k\i\t\g\6\b\q\9\a\e\8\u\c\8\l\t\y\c\v\d\e\5\7\e\l\j\e\d\2\x\q\n\3\3\h\5\h\c\3\p\n\2\9\z\b\p\d\u\m\x\6\n\0\g\i\7\y\w\s\t\d\k\1\i\0\k\3\o\u\1\p\7\6\9\t\z\k\g\i\j\u\d\3\w\6\s\j\w\g\0\5\x\m\u\z\b\a\j\9\z\p\7\b\f\l\l\p\4\m\5\1\b\t\0\h\w\s\b\0\w\u\f\9\l\t\2\u\c\w\s\d\g\u\r\q\7\e\j\u\j\l\t\f\p\i\l\p\7\7\f\r\r\v\7\8\7\f\e\c\m\n\b\b\0\o\6\0\r\w\g\0\g\e\0\e\x\6\6\z\w\k\r\i\h\0\2\j\c\l\9\t\3\9\i\0\4\7\5\0\t\z\s\m\h\9\v\8\j\q\1\x\g\q\j\3\j\q\r\h\u\n\e\t\n\u\s\j\b\w\7\u\o\b\z\f\t\m\z\i\3\s\g\z\i\d\y\x\l\p\5\4\n\2\p\8\a\r\s\t\q\e\a\2\t\k\4\l\l\g\m\j\6\q\a\y\5\0\6\u\8\s\8\x\x\6\n\4\d\f\m\s\h\l\e\e\4\r\i\v\3\4\5\b\b\a\s\z\w\o\t\z\1\n\b\1\r\n\v\j\m\y\b\o\r\i\4\r\u\3\k\f\w\u\m\c\h\a\4\j\3\9\f\m\e\o\8\3\x\g\f\3\2\q\9\2\n\p\o\f\k\m\r\6\q\1\8\6\x\g\k\y\7\v\j\r\e\3\f\8\a\d\l\c\j\t\r\q\7\t\h\h\e\1\3\a\z\r\7\4\z\y\6\p\6\z\i\b\d\o\s\n\v\l\m\8\b\1\4\u\c\y\4\h\d\5\m\6\k\r\0\s\9\o\5\4\3\6\j\0\n\f\e\1\y\i\o\f\u\1\h\f\j\c\2\z\p\3\a\6\k\n\l\9\3\a\1\a\2\l\c\v\a\s\z\p\g\t\o\8\i\5\u\b\o\b\a\e\8\q\m\v\3\4\b\k\j\x\g\g\l\4\5\s\l\i\z\l\e\t\u\5\b\3\g\y\y\h\3\b\4\w\j\q\t\k\h\k\j\y\7\z\s\9\d\x\q\u\n\9\7\j\8\b\d\s\y\u\h\f\o\5\m\j\e\t\u\t\u\h\j\i\q\y\d\g\l\8\c\d\u\w\z\2\c\u\8\o\8\p\l\m\q\8\d\l\1\8\0\w\o\8\n\a\y\i\n\c\8\p\m\5\b\d\5\c\6\2\n\f\5\6\3\a\m\r\a\j\0\p\n\r\4\x\3\n\1\y\r\v\q\9\p\u\q\b\h\q\z\g\k\a\8\n\u\s\9\5\u\b\s\9\z\5\s\i\1\a\l\q\v\9\s\n\0\i\c\b\z\2\b\r\4\c\l\8\j\e\a\x\1\t\4\a\r\n\t\y\l\p\2\y\g\0\8\g\u\e\p\v\4\5\0\o\3\c\9\b\3\4\n\i\p\c\x\v\y\e\m\f\k\g\q\l\1\s\z\a\1\p\8\z\6\i\b\z\p\1\q\r\t\d\9\k\n\h\l\2\e\u\y\r\n\t\d\1\n\d\k\e\u\v\g\9\f\j\e\l\j\u\j\5\5\a\d\w\z\j\s\3\v\r\6\a\1\x\7\h\h\n\v\m\g\g\x\h\m\r\2\a\j\s\2\6\8\0\r\i\4\q\s\v\5\m\e\y\b\x\1\v\d\u\f\1\6\2\p\9\b\x\u\g\4\0\u\2\v\k\2\m\w\5\x\a\y\s\5\a\x\i\k\e\o\q\2\x\a\g\d\g\l\8\5\l\y\n\r\h\g\w\d\y\e\u\s\8\h\g\b\7\d\j\u\6\0\d\n\n\9\l\1\0\h\t\2\h\r\5\8\0\d\f\3\v\1\o\t\i\8\z\s\j\3\8\o\e\x\s\l\8\r\y\j\r\o\z\i\l\1\2\t\z\w\e\a\p\d\4\6\n\g\0\h\4\g\b\y\i\l\8\y\w\y\w\3\3\j\c\i\o\r\h\r\x\s\s\y\f\u\m\t\d\c\w\x\a\j\8\4\r\x\a\v\p\g\o\9\1\x\y\b\j\c\m\q\o\8\g\3\k\t\i\r\f\c\r\a\w\t\d\v\d\e\7\h\c\v\a\n\x\2\c\z\s\h\b\g\3\g\r\g\8\o\b\r\b\w\5\c\q\b\i\3\8\n\e\z\g\d\5\r\m\y\g\b\z\s\w\7\a\s\v\u\o\g\0\p\3\r\k\j\f\x\j\f\r\f\v\g\5\n\9\a\0\s\0\s\r\2\x\v\u\7\h\w\c\w\0\i\z\p\h\j\7\x\j\k\y\9\d\m\m\e\d\1\6\s\1\e\x\v\p\u\5\j\m\q\q\0\8\0\r\z\v\y\m\l\0\6\j\2\p\n\7\b\c\7\6\a\p\i\j\0\j\b\e\p\l\v\1\0\w\z\i\
i\4\r\t\f\n\c\3\3\h\r\w\0\c\1\p\7\f\m\o\p\o\n\m\p\s\3\6\a\1\1\c\l\k\7\d\9\w\m\1\k\u\a\3\8\9\a\i\l\0\n\3\c\i\u\i\t\y\p\p\1\4\x\3\i\j\i\5\v\0\o\m\f\i\w\v\v\k\f\d\q\3\9\y\l\t\u\w\6\q\4\n\8\c\f\y\g\c\i\7\h\3\j\y\c\5\5\9\q\2\4\t\3\w\x\n\8\n\z\m\x\9\7\d\q\m\8\e\p\q\8\b\v\h\4\a\f\h\3\d\w\f\4\v\a\8\h\w\h\b\h\8\x\q\u\n\3\m\a\4\g\3\w\4\1\g\b\5\u\2\a\p\6\e\h\i\m\z\f\5\6\u\1\o\a\s\0\h\s\v\w\z\w\4\b\i\3\6\x\q\g\m\b\l\7\9\o\q\n\m\t\y\c\e\d\8\5\n\8\y\t\7\u\8\c\n\l\5\d\h\q\c\s\j\9\w\n\c\x\p\g\w\4\q\p\m\4\f\x\m\b\d\m\2\w\r\i\h\5\z\v\q\3\m\q\v\1\e\m\o\i\9\9\u\u\f\f\d\9\y\4\v\9\9\v\0\g\c\x\y\p\o\o\x\8\z\w\p\j\0\4\y\c\f\5\9\0\f\7\9\0\l\2\v\k\l\n\m\6\3\c\l\b\s\v\e\z\i\w\n\j\7\j\l\v\v\m\c\t\2\e\m\1\y\b\l\4\p\e\a\5\o\a\r\1\1\l\n\s\x\d\3\u\g\8\i\f\m\6\r\h\u\x\m\4\3\g\p\6\e\8\e\8\8\b\4\j\p\v\y\k\6\s\p\d\9\c\g\n\a\d\j\8\w\d\t\9\f\9\i\n\2\b\q\p\e\2\9\m\a\h\w\n\n\8\l\v\e\j\h\7\8\i\k\w\3\u\r\i\z\d\i\8\a\n\o\4\3\c\0\8\x\d\d\v\e\v\p\e\t\k\v\1\k\4\r\t\u\9\j\m\t\s\j\b\5\o\1\a\x\y\3\l\l\f\6\u\z\3\y\1\j\8\5\8\k\e\5\5\r\l\g\q\d\h\a\u\s\t\g\l\p\p\a\2\t\j\r\q\y\4\w\h\l\t\9\3\4\g\0\4\f\9\a\t\6\u\9\w\0\g\e\p\7\t\j\h\s\i\1\y\m\g\t\a\t\6\m\n\d\9\n\o\5\j\y\g\y\m\5\c\2\v\9\4\r\u\2\y\j\n\i\0\4\q\5\k\r\o\q\o\g\4\9\0\0\6\2\7\a\x\i\4\h\c\f\c\e\t\n\x\r\5\t\l\o\g\i\k\2\z\y\3\h\q\k\e\n\h\c\t\n\w\k\v\f\6\6\7\4\2\e\w\u\6\m\u\w\1\k\5\q\0\u\1\6\c\e\e\o\g\v\q\3\9\n\2\n\d\8\l\0\q\l\3\8\s\0\p\h\l\8\w\o\n\b\4\p\s\0\x\b\2\6\h\e\p\5\9\x\j\t\5\r\9\e\a\5\k\z\g\r\v\5\k\4\a\2\8\m\4\7\q\f\7\i\r\r\n\p\z\f\e\j\l\1\u\l\5\q\r\n\w\4\n\2\w\t\u\7\w\i\c\5\s\o\a\5\g\q\s\h\b\k\h\x\1\m\2\s\2\s\r\g\o\d\c\u\c\o\j\v\4\z\3\8\h\4\3\1\v\o\b\n\l\6\4\y\t\7\x\h\h\u\h\i\s\4\n\o\z\8\l\w\n\o\z\g\q\8\x\k\3\w\c\k\y\0\g\1\2\m\2\3\b\c\a\s\p\6\w\j\f\5\y\2\b\4\l\t\w\y\j\w\x\h\e\q\q\l\e\i\s\l\1\u\z\b\k\j\k\5\5\f\z\i\m\9\0\f\2\k\j\b\f\6\p\m\j\l\3\l\t\f\7\j\y\t\a\8\m\s\k\w\b\b\6\2\9\6\h\z\e\r\k\1\7\c\r\3\f\q\z\y\c\k\p\6\u\f\9\0\8\7\o\q\d\5\k\e\m\v\r\j\q\u\c\e\x\i\v\u\d\8\z\s\c\p\k\8\e\2\1\6\m\s\l\x\d\4\n\3\1\a\a\0\d\x\6\b\s\u\c\p\d\1\g\7\k\3\l\r\f\k\1\m\b\y\l\x\s\0\n\4\9\1\f\8\3\o\8\w\4\q\1\e\2\g\6\4\q\y\t\p\6\6\y\3\m\1\5\q\5\t\n\o\0\6\b\u\i\a\r\9\k\0\8\l\h\r\8\r\o\j\6\w\f\h\t\s\3\q\a\s\y\0\t\p\8\9\u\k\7\d\j\d\g\j\u\a\s\j\5\e\7\t\0\6\z\q\a\7\6\z\v\7\y\n\5\y\v\q\j\v\q\j\l\5\r\3\q\c\2\5\a\7\q\6\i\p\r\q\d\h\8\8\b\j\8\f\i\q\9\1\e\o\q\1\u\3\j\y\n\t\q\a\1\z\t\1\b\9\3\q\t\r\b\e\n\5\5\u\e\n\a\m\o\8\n\z\g\3\n\8\6\w\x\7\7\x\e\n\i\1\c\4\g\m\o\y\4\8\e\v\7\j\w\0\8\d\7\6\z\8\z\7\l\d\p\9\v\t\6\3\4\7\f\d\7\w\h\3\2\g\l\e\f ]] 00:26:20.546 00:26:20.546 real 0m2.220s 00:26:20.546 user 0m1.349s 00:26:20.546 sys 0m0.717s 00:26:20.546 16:43:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.546 16:43:51 -- common/autotest_common.sh@10 -- # set +x 00:26:20.546 16:43:51 -- dd/basic_rw.sh@1 -- # cleanup 00:26:20.546 16:43:51 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:26:20.546 16:43:51 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:20.546 16:43:51 -- dd/common.sh@11 -- # local nvme_ref= 00:26:20.546 16:43:51 -- dd/common.sh@12 -- # local size=0xffff 00:26:20.546 16:43:51 -- dd/common.sh@14 -- # local bs=1048576 00:26:20.546 16:43:51 -- dd/common.sh@15 -- # local count=1 00:26:20.546 16:43:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:20.546 16:43:51 -- dd/common.sh@18 -- # gen_conf 00:26:20.546 16:43:51 -- dd/common.sh@31 -- # xtrace_disable 00:26:20.546 16:43:51 -- common/autotest_common.sh@10 -- # set +x 00:26:20.546 { 00:26:20.546 "subsystems": [ 00:26:20.546 { 00:26:20.546 
"subsystem": "bdev", 00:26:20.546 "config": [ 00:26:20.546 { 00:26:20.546 "params": { 00:26:20.546 "trtype": "pcie", 00:26:20.546 "traddr": "0000:00:06.0", 00:26:20.546 "name": "Nvme0" 00:26:20.546 }, 00:26:20.546 "method": "bdev_nvme_attach_controller" 00:26:20.546 }, 00:26:20.546 { 00:26:20.546 "method": "bdev_wait_for_examine" 00:26:20.546 } 00:26:20.546 ] 00:26:20.546 } 00:26:20.546 ] 00:26:20.546 } 00:26:20.546 [2024-07-13 16:43:51.953501] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:20.546 [2024-07-13 16:43:51.953695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144223 ] 00:26:20.804 [2024-07-13 16:43:52.096248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.804 [2024-07-13 16:43:52.162966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.321  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:21.321 00:26:21.321 16:43:52 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:21.321 00:26:21.321 real 0m24.367s 00:26:21.321 user 0m15.394s 00:26:21.321 sys 0m7.181s 00:26:21.321 16:43:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.321 16:43:52 -- common/autotest_common.sh@10 -- # set +x 00:26:21.321 ************************************ 00:26:21.321 END TEST spdk_dd_basic_rw 00:26:21.322 ************************************ 00:26:21.677 16:43:52 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:26:21.677 16:43:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.677 16:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.677 16:43:52 -- common/autotest_common.sh@10 -- # set +x 00:26:21.677 ************************************ 00:26:21.677 START TEST spdk_dd_posix 00:26:21.677 ************************************ 00:26:21.677 16:43:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:26:21.677 * Looking for test storage... 
00:26:21.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:21.677 16:43:52 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:21.677 16:43:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.677 16:43:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.677 16:43:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.677 16:43:52 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.678 16:43:52 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.678 16:43:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.678 16:43:52 -- paths/export.sh@5 -- # export PATH 00:26:21.678 16:43:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:21.678 16:43:52 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:26:21.678 16:43:52 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:26:21.678 16:43:52 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:26:21.678 16:43:52 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:26:21.678 16:43:52 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:21.678 16:43:52 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:21.678 16:43:52 -- dd/posix.sh@130 -- # tests 00:26:21.678 16:43:52 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:26:21.678 * First test run, using AIO 00:26:21.678 16:43:52 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:26:21.678 16:43:52 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.678 16:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.678 16:43:52 -- common/autotest_common.sh@10 -- # set +x 00:26:21.678 ************************************ 00:26:21.678 START TEST dd_flag_append 00:26:21.678 ************************************ 00:26:21.678 16:43:52 -- common/autotest_common.sh@1104 -- # append 00:26:21.678 16:43:52 -- dd/posix.sh@16 -- # local dump0 00:26:21.678 16:43:52 -- dd/posix.sh@17 -- # local dump1 00:26:21.678 16:43:52 -- dd/posix.sh@19 -- # gen_bytes 32 00:26:21.678 16:43:52 -- dd/common.sh@98 -- # xtrace_disable 00:26:21.678 16:43:52 -- common/autotest_common.sh@10 -- # set +x 00:26:21.678 16:43:52 -- dd/posix.sh@19 -- # dump0=bgps9bwxcgvk3ptv7tugcs569t302ou0 00:26:21.678 16:43:52 -- dd/posix.sh@20 -- # gen_bytes 32 00:26:21.678 16:43:52 -- dd/common.sh@98 -- # xtrace_disable 00:26:21.678 16:43:52 -- common/autotest_common.sh@10 -- # set +x 00:26:21.678 16:43:52 -- dd/posix.sh@20 -- # dump1=cd3frnico8y11gqmkeasks53l04evjup 00:26:21.678 16:43:52 -- dd/posix.sh@22 -- # printf %s bgps9bwxcgvk3ptv7tugcs569t302ou0 00:26:21.678 16:43:52 -- dd/posix.sh@23 -- # printf %s cd3frnico8y11gqmkeasks53l04evjup 00:26:21.678 16:43:52 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:26:21.678 [2024-07-13 16:43:53.039361] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:21.678 [2024-07-13 16:43:53.039628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144292 ] 00:26:21.937 [2024-07-13 16:43:53.193125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.937 [2024-07-13 16:43:53.262433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.508  Copying: 32/32 [B] (average 31 kBps) 00:26:22.508 00:26:22.508 16:43:53 -- dd/posix.sh@27 -- # [[ cd3frnico8y11gqmkeasks53l04evjupbgps9bwxcgvk3ptv7tugcs569t302ou0 == \c\d\3\f\r\n\i\c\o\8\y\1\1\g\q\m\k\e\a\s\k\s\5\3\l\0\4\e\v\j\u\p\b\g\p\s\9\b\w\x\c\g\v\k\3\p\t\v\7\t\u\g\c\s\5\6\9\t\3\0\2\o\u\0 ]] 00:26:22.508 00:26:22.508 real 0m0.829s 00:26:22.508 user 0m0.444s 00:26:22.508 sys 0m0.246s 00:26:22.508 ************************************ 00:26:22.508 END TEST dd_flag_append 00:26:22.508 ************************************ 00:26:22.508 16:43:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.508 16:43:53 -- common/autotest_common.sh@10 -- # set +x 00:26:22.508 16:43:53 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:26:22.508 16:43:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.508 16:43:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.508 16:43:53 -- common/autotest_common.sh@10 -- # set +x 00:26:22.508 ************************************ 00:26:22.508 START TEST dd_flag_directory 00:26:22.508 ************************************ 00:26:22.508 16:43:53 -- common/autotest_common.sh@1104 -- # directory 00:26:22.508 16:43:53 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:22.508 16:43:53 -- common/autotest_common.sh@640 -- # local es=0 
00:26:22.508 16:43:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:22.508 16:43:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.508 16:43:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.508 16:43:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.508 16:43:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.508 16:43:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.508 16:43:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:22.508 16:43:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:22.508 16:43:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:22.508 16:43:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:22.508 [2024-07-13 16:43:53.936354] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:22.508 [2024-07-13 16:43:53.937126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144334 ] 00:26:22.765 [2024-07-13 16:43:54.092280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.765 [2024-07-13 16:43:54.168754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.023 [2024-07-13 16:43:54.283685] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:23.023 [2024-07-13 16:43:54.283787] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:23.023 [2024-07-13 16:43:54.283825] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:23.023 [2024-07-13 16:43:54.463523] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:23.281 16:43:54 -- common/autotest_common.sh@643 -- # es=236 00:26:23.281 16:43:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:23.281 16:43:54 -- common/autotest_common.sh@652 -- # es=108 00:26:23.281 16:43:54 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:23.281 16:43:54 -- common/autotest_common.sh@660 -- # es=1 00:26:23.281 16:43:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:23.281 16:43:54 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:23.281 16:43:54 -- common/autotest_common.sh@640 -- # local es=0 00:26:23.281 16:43:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:23.281 16:43:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.281 16:43:54 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.281 16:43:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.281 16:43:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.281 16:43:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.281 16:43:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:23.281 16:43:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:23.281 16:43:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:23.281 16:43:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:23.281 [2024-07-13 16:43:54.744445] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:23.281 [2024-07-13 16:43:54.744718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144351 ] 00:26:23.540 [2024-07-13 16:43:54.900124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.540 [2024-07-13 16:43:54.969166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.799 [2024-07-13 16:43:55.084896] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:23.799 [2024-07-13 16:43:55.084993] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:23.799 [2024-07-13 16:43:55.085025] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:23.799 [2024-07-13 16:43:55.265842] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:24.057 16:43:55 -- common/autotest_common.sh@643 -- # es=236 00:26:24.057 16:43:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:24.057 16:43:55 -- common/autotest_common.sh@652 -- # es=108 00:26:24.057 16:43:55 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:24.057 16:43:55 -- common/autotest_common.sh@660 -- # es=1 00:26:24.057 16:43:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:24.057 00:26:24.057 real 0m1.602s 00:26:24.057 user 0m0.868s 00:26:24.057 sys 0m0.534s 00:26:24.057 16:43:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.057 ************************************ 00:26:24.057 END TEST dd_flag_directory 00:26:24.057 ************************************ 00:26:24.057 16:43:55 -- common/autotest_common.sh@10 -- # set +x 00:26:24.316 16:43:55 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:26:24.316 16:43:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:24.316 16:43:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:24.316 16:43:55 -- common/autotest_common.sh@10 -- # set +x 00:26:24.316 ************************************ 00:26:24.316 START TEST dd_flag_nofollow 00:26:24.316 ************************************ 00:26:24.316 16:43:55 -- common/autotest_common.sh@1104 -- # nofollow 00:26:24.316 16:43:55 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:24.316 16:43:55 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:24.316 16:43:55 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:24.316 16:43:55 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:24.316 16:43:55 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:24.316 16:43:55 -- common/autotest_common.sh@640 -- # local es=0 00:26:24.316 16:43:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:24.316 16:43:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.316 16:43:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:24.316 16:43:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.316 16:43:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:24.316 16:43:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.316 16:43:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:24.316 16:43:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.316 16:43:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:24.316 16:43:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:24.316 [2024-07-13 16:43:55.624729] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:24.316 [2024-07-13 16:43:55.625547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144382 ] 00:26:24.316 [2024-07-13 16:43:55.780354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.574 [2024-07-13 16:43:55.849200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.575 [2024-07-13 16:43:55.963719] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:24.575 [2024-07-13 16:43:55.963820] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:24.575 [2024-07-13 16:43:55.963864] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:24.833 [2024-07-13 16:43:56.144436] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:25.092 16:43:56 -- common/autotest_common.sh@643 -- # es=216 00:26:25.092 16:43:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:25.092 16:43:56 -- common/autotest_common.sh@652 -- # es=88 00:26:25.092 16:43:56 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:25.092 16:43:56 -- common/autotest_common.sh@660 -- # es=1 00:26:25.092 16:43:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:25.092 16:43:56 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:25.092 16:43:56 -- common/autotest_common.sh@640 -- # local es=0 00:26:25.092 16:43:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:25.092 16:43:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.092 16:43:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:25.092 16:43:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.092 16:43:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:25.092 16:43:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.092 16:43:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:25.092 16:43:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:25.092 16:43:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:25.092 16:43:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:25.092 [2024-07-13 16:43:56.415275] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:25.092 [2024-07-13 16:43:56.415546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144402 ] 00:26:25.351 [2024-07-13 16:43:56.568802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.351 [2024-07-13 16:43:56.642400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.351 [2024-07-13 16:43:56.758586] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:25.351 [2024-07-13 16:43:56.758688] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:25.351 [2024-07-13 16:43:56.758720] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:25.610 [2024-07-13 16:43:56.938656] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:25.870 16:43:57 -- common/autotest_common.sh@643 -- # es=216 00:26:25.870 16:43:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:25.870 16:43:57 -- common/autotest_common.sh@652 -- # es=88 00:26:25.870 16:43:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:25.870 16:43:57 -- common/autotest_common.sh@660 -- # es=1 00:26:25.870 16:43:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:25.870 16:43:57 -- dd/posix.sh@46 -- # gen_bytes 512 00:26:25.870 16:43:57 -- dd/common.sh@98 -- # xtrace_disable 00:26:25.870 16:43:57 -- common/autotest_common.sh@10 -- # set +x 00:26:25.870 16:43:57 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:25.870 [2024-07-13 16:43:57.223721] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:25.870 [2024-07-13 16:43:57.223965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144413 ] 00:26:26.130 [2024-07-13 16:43:57.380027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.130 [2024-07-13 16:43:57.473541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.700  Copying: 512/512 [B] (average 500 kBps) 00:26:26.700 00:26:26.700 16:43:58 -- dd/posix.sh@49 -- # [[ wa7r1cesrpm6v47szz8kg7hv8s2cxpzw9ffg63h9tcqx7b4rd2zcmdiu9xbwj1apnc0x8fmszdzoyhhmrjn6xlmm27ajysjwkw57yrc7qlyla46ss9ry5iv2aqx5xz4qlxwebutjls3oygij103qz1t4nhorg3hetfk2lnrdhhxcgg77zhrt89bxd5fy5y57heifjd04bw46i5rawp3e5g80y89uy23lpo7qdvwn5rlbambk7vvveid4xzyk9ow7b3qzzfc0y9idvao5cc2gge7s6603wf0zlqasulsea5xqopa0xa51ds9gufr0ujql5ykcpowxzl5qkl2jgsvtb8lkjf0x4yam25jsr4ac99ad52ytu57zay6splc0zfuy7ku9x1ylt5b72ap9i1c6cec4s0vajmtzg5usciqoz82z46idw0v1gyh60j0w9oj3qh70lm916as0jq945vrvf429k49fh6kyb3q01t5u9yl6le8h2qb87n8bznfipzb8 == \w\a\7\r\1\c\e\s\r\p\m\6\v\4\7\s\z\z\8\k\g\7\h\v\8\s\2\c\x\p\z\w\9\f\f\g\6\3\h\9\t\c\q\x\7\b\4\r\d\2\z\c\m\d\i\u\9\x\b\w\j\1\a\p\n\c\0\x\8\f\m\s\z\d\z\o\y\h\h\m\r\j\n\6\x\l\m\m\2\7\a\j\y\s\j\w\k\w\5\7\y\r\c\7\q\l\y\l\a\4\6\s\s\9\r\y\5\i\v\2\a\q\x\5\x\z\4\q\l\x\w\e\b\u\t\j\l\s\3\o\y\g\i\j\1\0\3\q\z\1\t\4\n\h\o\r\g\3\h\e\t\f\k\2\l\n\r\d\h\h\x\c\g\g\7\7\z\h\r\t\8\9\b\x\d\5\f\y\5\y\5\7\h\e\i\f\j\d\0\4\b\w\4\6\i\5\r\a\w\p\3\e\5\g\8\0\y\8\9\u\y\2\3\l\p\o\7\q\d\v\w\n\5\r\l\b\a\m\b\k\7\v\v\v\e\i\d\4\x\z\y\k\9\o\w\7\b\3\q\z\z\f\c\0\y\9\i\d\v\a\o\5\c\c\2\g\g\e\7\s\6\6\0\3\w\f\0\z\l\q\a\s\u\l\s\e\a\5\x\q\o\p\a\0\x\a\5\1\d\s\9\g\u\f\r\0\u\j\q\l\5\y\k\c\p\o\w\x\z\l\5\q\k\l\2\j\g\s\v\t\b\8\l\k\j\f\0\x\4\y\a\m\2\5\j\s\r\4\a\c\9\9\a\d\5\2\y\t\u\5\7\z\a\y\6\s\p\l\c\0\z\f\u\y\7\k\u\9\x\1\y\l\t\5\b\7\2\a\p\9\i\1\c\6\c\e\c\4\s\0\v\a\j\m\t\z\g\5\u\s\c\i\q\o\z\8\2\z\4\6\i\d\w\0\v\1\g\y\h\6\0\j\0\w\9\o\j\3\q\h\7\0\l\m\9\1\6\a\s\0\j\q\9\4\5\v\r\v\f\4\2\9\k\4\9\f\h\6\k\y\b\3\q\0\1\t\5\u\9\y\l\6\l\e\8\h\2\q\b\8\7\n\8\b\z\n\f\i\p\z\b\8 ]] 00:26:26.700 00:26:26.700 real 0m2.486s 00:26:26.700 user 0m1.295s 00:26:26.700 sys 0m0.825s 00:26:26.700 ************************************ 00:26:26.700 END TEST dd_flag_nofollow 00:26:26.700 16:43:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.700 16:43:58 -- common/autotest_common.sh@10 -- # set +x 00:26:26.700 ************************************ 00:26:26.700 16:43:58 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:26:26.700 16:43:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:26.700 16:43:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:26.700 16:43:58 -- common/autotest_common.sh@10 -- # set +x 00:26:26.700 ************************************ 00:26:26.700 START TEST dd_flag_noatime 00:26:26.700 ************************************ 00:26:26.700 16:43:58 -- common/autotest_common.sh@1104 -- # noatime 00:26:26.700 16:43:58 -- dd/posix.sh@53 -- # local atime_if 00:26:26.700 16:43:58 -- dd/posix.sh@54 -- # local atime_of 00:26:26.700 16:43:58 -- dd/posix.sh@58 -- # gen_bytes 512 00:26:26.700 16:43:58 -- dd/common.sh@98 -- # xtrace_disable 00:26:26.700 16:43:58 -- common/autotest_common.sh@10 -- # set +x 00:26:26.700 16:43:58 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:26.700 16:43:58 -- dd/posix.sh@60 -- # atime_if=1720889037 00:26:26.700 16:43:58 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:26.700 16:43:58 -- dd/posix.sh@61 -- # atime_of=1720889038 00:26:26.700 16:43:58 -- dd/posix.sh@66 -- # sleep 1 00:26:28.080 16:43:59 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:28.080 [2024-07-13 16:43:59.192756] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:28.080 [2024-07-13 16:43:59.193008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144470 ] 00:26:28.080 [2024-07-13 16:43:59.352876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.080 [2024-07-13 16:43:59.430805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.649  Copying: 512/512 [B] (average 500 kBps) 00:26:28.649 00:26:28.649 16:43:59 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:28.649 16:43:59 -- dd/posix.sh@69 -- # (( atime_if == 1720889037 )) 00:26:28.649 16:43:59 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:28.649 16:43:59 -- dd/posix.sh@70 -- # (( atime_of == 1720889038 )) 00:26:28.649 16:43:59 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:28.649 [2024-07-13 16:44:00.035319] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:28.649 [2024-07-13 16:44:00.035510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144488 ] 00:26:28.909 [2024-07-13 16:44:00.175850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.909 [2024-07-13 16:44:00.245329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.476  Copying: 512/512 [B] (average 500 kBps) 00:26:29.476 00:26:29.476 16:44:00 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:29.476 16:44:00 -- dd/posix.sh@73 -- # (( atime_if < 1720889040 )) 00:26:29.476 00:26:29.476 real 0m2.687s 00:26:29.476 user 0m0.863s 00:26:29.476 sys 0m0.561s 00:26:29.476 ************************************ 00:26:29.476 END TEST dd_flag_noatime 00:26:29.476 ************************************ 00:26:29.476 16:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.476 16:44:00 -- common/autotest_common.sh@10 -- # set +x 00:26:29.476 16:44:00 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:26:29.476 16:44:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:29.476 16:44:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:29.476 16:44:00 -- common/autotest_common.sh@10 -- # set +x 00:26:29.476 ************************************ 00:26:29.476 START TEST dd_flags_misc 00:26:29.476 ************************************ 00:26:29.476 16:44:00 -- common/autotest_common.sh@1104 -- # io 00:26:29.476 16:44:00 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:29.476 16:44:00 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:26:29.476 16:44:00 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:29.476 16:44:00 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:29.476 16:44:00 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:29.476 16:44:00 -- dd/common.sh@98 -- # xtrace_disable 00:26:29.476 16:44:00 -- common/autotest_common.sh@10 -- # set +x 00:26:29.476 16:44:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:29.477 16:44:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:29.477 [2024-07-13 16:44:00.915685] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:29.477 [2024-07-13 16:44:00.915875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144520 ] 00:26:29.735 [2024-07-13 16:44:01.057805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.735 [2024-07-13 16:44:01.126442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.254  Copying: 512/512 [B] (average 500 kBps) 00:26:30.254 00:26:30.254 16:44:01 -- dd/posix.sh@93 -- # [[ v65raaiezlzfgl20141e0tpfgeno0f0shqg8emlte4eb2y2vhwzdxn7gyjqcwogdzkg8qxldlh6u9r2fjzggh6606tgaptj0deln3k2stlp5fjcdg8au12w9jrdyqomrge648h4kaozruicpajrdckt9af0mm8diqam2t77krenc33a0cplaa807v2lc9si0itdi96fsqbv1gctfga1nf1p6x5brs6h24uptziv5oj1ef5k4w38897thga2etn0kohw3crc4sndofu8xcqu4zgnixxgbg2vkk5iw0vohpn91kxv444x8kiomxyjbcu8i3l0l4sqqgzmbkuan8zg6byznwfcv9va0zooevsfbqgpc34bon94dpn83xv9ij5m02e0ddivwz8o9v4k9iqpc87darrdrubvgr4eoss4e0gtr5kscjuduifd74vjxliq8x7pi40qiajwrz4pphb12v1r6kpicohdqd4g6l9bpcg508zyvuk01355607ayrqhh == \v\6\5\r\a\a\i\e\z\l\z\f\g\l\2\0\1\4\1\e\0\t\p\f\g\e\n\o\0\f\0\s\h\q\g\8\e\m\l\t\e\4\e\b\2\y\2\v\h\w\z\d\x\n\7\g\y\j\q\c\w\o\g\d\z\k\g\8\q\x\l\d\l\h\6\u\9\r\2\f\j\z\g\g\h\6\6\0\6\t\g\a\p\t\j\0\d\e\l\n\3\k\2\s\t\l\p\5\f\j\c\d\g\8\a\u\1\2\w\9\j\r\d\y\q\o\m\r\g\e\6\4\8\h\4\k\a\o\z\r\u\i\c\p\a\j\r\d\c\k\t\9\a\f\0\m\m\8\d\i\q\a\m\2\t\7\7\k\r\e\n\c\3\3\a\0\c\p\l\a\a\8\0\7\v\2\l\c\9\s\i\0\i\t\d\i\9\6\f\s\q\b\v\1\g\c\t\f\g\a\1\n\f\1\p\6\x\5\b\r\s\6\h\2\4\u\p\t\z\i\v\5\o\j\1\e\f\5\k\4\w\3\8\8\9\7\t\h\g\a\2\e\t\n\0\k\o\h\w\3\c\r\c\4\s\n\d\o\f\u\8\x\c\q\u\4\z\g\n\i\x\x\g\b\g\2\v\k\k\5\i\w\0\v\o\h\p\n\9\1\k\x\v\4\4\4\x\8\k\i\o\m\x\y\j\b\c\u\8\i\3\l\0\l\4\s\q\q\g\z\m\b\k\u\a\n\8\z\g\6\b\y\z\n\w\f\c\v\9\v\a\0\z\o\o\e\v\s\f\b\q\g\p\c\3\4\b\o\n\9\4\d\p\n\8\3\x\v\9\i\j\5\m\0\2\e\0\d\d\i\v\w\z\8\o\9\v\4\k\9\i\q\p\c\8\7\d\a\r\r\d\r\u\b\v\g\r\4\e\o\s\s\4\e\0\g\t\r\5\k\s\c\j\u\d\u\i\f\d\7\4\v\j\x\l\i\q\8\x\7\p\i\4\0\q\i\a\j\w\r\z\4\p\p\h\b\1\2\v\1\r\6\k\p\i\c\o\h\d\q\d\4\g\6\l\9\b\p\c\g\5\0\8\z\y\v\u\k\0\1\3\5\5\6\0\7\a\y\r\q\h\h ]] 00:26:30.254 16:44:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:30.254 16:44:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:30.513 [2024-07-13 16:44:01.740189] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:30.513 [2024-07-13 16:44:01.740483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144532 ] 00:26:30.513 [2024-07-13 16:44:01.896030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.513 [2024-07-13 16:44:01.964723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.030  Copying: 512/512 [B] (average 500 kBps) 00:26:31.030 00:26:31.290 16:44:02 -- dd/posix.sh@93 -- # [[ v65raaiezlzfgl20141e0tpfgeno0f0shqg8emlte4eb2y2vhwzdxn7gyjqcwogdzkg8qxldlh6u9r2fjzggh6606tgaptj0deln3k2stlp5fjcdg8au12w9jrdyqomrge648h4kaozruicpajrdckt9af0mm8diqam2t77krenc33a0cplaa807v2lc9si0itdi96fsqbv1gctfga1nf1p6x5brs6h24uptziv5oj1ef5k4w38897thga2etn0kohw3crc4sndofu8xcqu4zgnixxgbg2vkk5iw0vohpn91kxv444x8kiomxyjbcu8i3l0l4sqqgzmbkuan8zg6byznwfcv9va0zooevsfbqgpc34bon94dpn83xv9ij5m02e0ddivwz8o9v4k9iqpc87darrdrubvgr4eoss4e0gtr5kscjuduifd74vjxliq8x7pi40qiajwrz4pphb12v1r6kpicohdqd4g6l9bpcg508zyvuk01355607ayrqhh == \v\6\5\r\a\a\i\e\z\l\z\f\g\l\2\0\1\4\1\e\0\t\p\f\g\e\n\o\0\f\0\s\h\q\g\8\e\m\l\t\e\4\e\b\2\y\2\v\h\w\z\d\x\n\7\g\y\j\q\c\w\o\g\d\z\k\g\8\q\x\l\d\l\h\6\u\9\r\2\f\j\z\g\g\h\6\6\0\6\t\g\a\p\t\j\0\d\e\l\n\3\k\2\s\t\l\p\5\f\j\c\d\g\8\a\u\1\2\w\9\j\r\d\y\q\o\m\r\g\e\6\4\8\h\4\k\a\o\z\r\u\i\c\p\a\j\r\d\c\k\t\9\a\f\0\m\m\8\d\i\q\a\m\2\t\7\7\k\r\e\n\c\3\3\a\0\c\p\l\a\a\8\0\7\v\2\l\c\9\s\i\0\i\t\d\i\9\6\f\s\q\b\v\1\g\c\t\f\g\a\1\n\f\1\p\6\x\5\b\r\s\6\h\2\4\u\p\t\z\i\v\5\o\j\1\e\f\5\k\4\w\3\8\8\9\7\t\h\g\a\2\e\t\n\0\k\o\h\w\3\c\r\c\4\s\n\d\o\f\u\8\x\c\q\u\4\z\g\n\i\x\x\g\b\g\2\v\k\k\5\i\w\0\v\o\h\p\n\9\1\k\x\v\4\4\4\x\8\k\i\o\m\x\y\j\b\c\u\8\i\3\l\0\l\4\s\q\q\g\z\m\b\k\u\a\n\8\z\g\6\b\y\z\n\w\f\c\v\9\v\a\0\z\o\o\e\v\s\f\b\q\g\p\c\3\4\b\o\n\9\4\d\p\n\8\3\x\v\9\i\j\5\m\0\2\e\0\d\d\i\v\w\z\8\o\9\v\4\k\9\i\q\p\c\8\7\d\a\r\r\d\r\u\b\v\g\r\4\e\o\s\s\4\e\0\g\t\r\5\k\s\c\j\u\d\u\i\f\d\7\4\v\j\x\l\i\q\8\x\7\p\i\4\0\q\i\a\j\w\r\z\4\p\p\h\b\1\2\v\1\r\6\k\p\i\c\o\h\d\q\d\4\g\6\l\9\b\p\c\g\5\0\8\z\y\v\u\k\0\1\3\5\5\6\0\7\a\y\r\q\h\h ]] 00:26:31.290 16:44:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:31.290 16:44:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:31.290 [2024-07-13 16:44:02.574738] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:31.290 [2024-07-13 16:44:02.575731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144550 ] 00:26:31.290 [2024-07-13 16:44:02.736122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.552 [2024-07-13 16:44:02.812420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.120  Copying: 512/512 [B] (average 62 kBps) 00:26:32.120 00:26:32.120 16:44:03 -- dd/posix.sh@93 -- # [[ v65raaiezlzfgl20141e0tpfgeno0f0shqg8emlte4eb2y2vhwzdxn7gyjqcwogdzkg8qxldlh6u9r2fjzggh6606tgaptj0deln3k2stlp5fjcdg8au12w9jrdyqomrge648h4kaozruicpajrdckt9af0mm8diqam2t77krenc33a0cplaa807v2lc9si0itdi96fsqbv1gctfga1nf1p6x5brs6h24uptziv5oj1ef5k4w38897thga2etn0kohw3crc4sndofu8xcqu4zgnixxgbg2vkk5iw0vohpn91kxv444x8kiomxyjbcu8i3l0l4sqqgzmbkuan8zg6byznwfcv9va0zooevsfbqgpc34bon94dpn83xv9ij5m02e0ddivwz8o9v4k9iqpc87darrdrubvgr4eoss4e0gtr5kscjuduifd74vjxliq8x7pi40qiajwrz4pphb12v1r6kpicohdqd4g6l9bpcg508zyvuk01355607ayrqhh == \v\6\5\r\a\a\i\e\z\l\z\f\g\l\2\0\1\4\1\e\0\t\p\f\g\e\n\o\0\f\0\s\h\q\g\8\e\m\l\t\e\4\e\b\2\y\2\v\h\w\z\d\x\n\7\g\y\j\q\c\w\o\g\d\z\k\g\8\q\x\l\d\l\h\6\u\9\r\2\f\j\z\g\g\h\6\6\0\6\t\g\a\p\t\j\0\d\e\l\n\3\k\2\s\t\l\p\5\f\j\c\d\g\8\a\u\1\2\w\9\j\r\d\y\q\o\m\r\g\e\6\4\8\h\4\k\a\o\z\r\u\i\c\p\a\j\r\d\c\k\t\9\a\f\0\m\m\8\d\i\q\a\m\2\t\7\7\k\r\e\n\c\3\3\a\0\c\p\l\a\a\8\0\7\v\2\l\c\9\s\i\0\i\t\d\i\9\6\f\s\q\b\v\1\g\c\t\f\g\a\1\n\f\1\p\6\x\5\b\r\s\6\h\2\4\u\p\t\z\i\v\5\o\j\1\e\f\5\k\4\w\3\8\8\9\7\t\h\g\a\2\e\t\n\0\k\o\h\w\3\c\r\c\4\s\n\d\o\f\u\8\x\c\q\u\4\z\g\n\i\x\x\g\b\g\2\v\k\k\5\i\w\0\v\o\h\p\n\9\1\k\x\v\4\4\4\x\8\k\i\o\m\x\y\j\b\c\u\8\i\3\l\0\l\4\s\q\q\g\z\m\b\k\u\a\n\8\z\g\6\b\y\z\n\w\f\c\v\9\v\a\0\z\o\o\e\v\s\f\b\q\g\p\c\3\4\b\o\n\9\4\d\p\n\8\3\x\v\9\i\j\5\m\0\2\e\0\d\d\i\v\w\z\8\o\9\v\4\k\9\i\q\p\c\8\7\d\a\r\r\d\r\u\b\v\g\r\4\e\o\s\s\4\e\0\g\t\r\5\k\s\c\j\u\d\u\i\f\d\7\4\v\j\x\l\i\q\8\x\7\p\i\4\0\q\i\a\j\w\r\z\4\p\p\h\b\1\2\v\1\r\6\k\p\i\c\o\h\d\q\d\4\g\6\l\9\b\p\c\g\5\0\8\z\y\v\u\k\0\1\3\5\5\6\0\7\a\y\r\q\h\h ]] 00:26:32.120 16:44:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:32.120 16:44:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:32.120 [2024-07-13 16:44:03.432098] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:32.120 [2024-07-13 16:44:03.432398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144563 ] 00:26:32.120 [2024-07-13 16:44:03.588424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.379 [2024-07-13 16:44:03.657803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.946  Copying: 512/512 [B] (average 250 kBps) 00:26:32.946 00:26:32.946 16:44:04 -- dd/posix.sh@93 -- # [[ v65raaiezlzfgl20141e0tpfgeno0f0shqg8emlte4eb2y2vhwzdxn7gyjqcwogdzkg8qxldlh6u9r2fjzggh6606tgaptj0deln3k2stlp5fjcdg8au12w9jrdyqomrge648h4kaozruicpajrdckt9af0mm8diqam2t77krenc33a0cplaa807v2lc9si0itdi96fsqbv1gctfga1nf1p6x5brs6h24uptziv5oj1ef5k4w38897thga2etn0kohw3crc4sndofu8xcqu4zgnixxgbg2vkk5iw0vohpn91kxv444x8kiomxyjbcu8i3l0l4sqqgzmbkuan8zg6byznwfcv9va0zooevsfbqgpc34bon94dpn83xv9ij5m02e0ddivwz8o9v4k9iqpc87darrdrubvgr4eoss4e0gtr5kscjuduifd74vjxliq8x7pi40qiajwrz4pphb12v1r6kpicohdqd4g6l9bpcg508zyvuk01355607ayrqhh == \v\6\5\r\a\a\i\e\z\l\z\f\g\l\2\0\1\4\1\e\0\t\p\f\g\e\n\o\0\f\0\s\h\q\g\8\e\m\l\t\e\4\e\b\2\y\2\v\h\w\z\d\x\n\7\g\y\j\q\c\w\o\g\d\z\k\g\8\q\x\l\d\l\h\6\u\9\r\2\f\j\z\g\g\h\6\6\0\6\t\g\a\p\t\j\0\d\e\l\n\3\k\2\s\t\l\p\5\f\j\c\d\g\8\a\u\1\2\w\9\j\r\d\y\q\o\m\r\g\e\6\4\8\h\4\k\a\o\z\r\u\i\c\p\a\j\r\d\c\k\t\9\a\f\0\m\m\8\d\i\q\a\m\2\t\7\7\k\r\e\n\c\3\3\a\0\c\p\l\a\a\8\0\7\v\2\l\c\9\s\i\0\i\t\d\i\9\6\f\s\q\b\v\1\g\c\t\f\g\a\1\n\f\1\p\6\x\5\b\r\s\6\h\2\4\u\p\t\z\i\v\5\o\j\1\e\f\5\k\4\w\3\8\8\9\7\t\h\g\a\2\e\t\n\0\k\o\h\w\3\c\r\c\4\s\n\d\o\f\u\8\x\c\q\u\4\z\g\n\i\x\x\g\b\g\2\v\k\k\5\i\w\0\v\o\h\p\n\9\1\k\x\v\4\4\4\x\8\k\i\o\m\x\y\j\b\c\u\8\i\3\l\0\l\4\s\q\q\g\z\m\b\k\u\a\n\8\z\g\6\b\y\z\n\w\f\c\v\9\v\a\0\z\o\o\e\v\s\f\b\q\g\p\c\3\4\b\o\n\9\4\d\p\n\8\3\x\v\9\i\j\5\m\0\2\e\0\d\d\i\v\w\z\8\o\9\v\4\k\9\i\q\p\c\8\7\d\a\r\r\d\r\u\b\v\g\r\4\e\o\s\s\4\e\0\g\t\r\5\k\s\c\j\u\d\u\i\f\d\7\4\v\j\x\l\i\q\8\x\7\p\i\4\0\q\i\a\j\w\r\z\4\p\p\h\b\1\2\v\1\r\6\k\p\i\c\o\h\d\q\d\4\g\6\l\9\b\p\c\g\5\0\8\z\y\v\u\k\0\1\3\5\5\6\0\7\a\y\r\q\h\h ]] 00:26:32.946 16:44:04 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:32.946 16:44:04 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:32.946 16:44:04 -- dd/common.sh@98 -- # xtrace_disable 00:26:32.946 16:44:04 -- common/autotest_common.sh@10 -- # set +x 00:26:32.946 16:44:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:32.946 16:44:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:32.946 [2024-07-13 16:44:04.280125] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:32.946 [2024-07-13 16:44:04.280430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144581 ] 00:26:33.204 [2024-07-13 16:44:04.435986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.204 [2024-07-13 16:44:04.504865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.771  Copying: 512/512 [B] (average 500 kBps) 00:26:33.771 00:26:33.771 16:44:05 -- dd/posix.sh@93 -- # [[ hicc4f27kxveprqhzf733bhqxl52qnieijd410yelcg5ha9utc6ncwsel033ezhhxei2bvx0vmqcjrkyatz7x5zihwsodkylmglrcksbsvbeyfohs0z4kfgciq2ehypj16twzf6bh1o8ucme7hyya76aia4gbrisv32hiaqb1tqrvtpyhjl8zax83x8lg3kaxn93tsvnl4t41vl23l8g0rfqhhf3ldsx9lm3e3uvzg66rilulvdhcocq4d4j4tzjc4vepjsemr78r6om11rjrmx1vbjepng0qk8kzfhz42amrsmc1o86hl1w0gfwr278w6wqaqdwgktw07i9sxwrt40g0jtue94c6xrfh2ebt2ytwo3923frn14rlwcbjqa3usr75ctexemnxppqjblhy2cdczecmt0hq7888ysn9s2iinaph737nf62jhzo1ona2o7nqhe95mf031fvy8nl02kys389n1trswgvetdsvvg0k5tnll599yms0r2ujl0s == \h\i\c\c\4\f\2\7\k\x\v\e\p\r\q\h\z\f\7\3\3\b\h\q\x\l\5\2\q\n\i\e\i\j\d\4\1\0\y\e\l\c\g\5\h\a\9\u\t\c\6\n\c\w\s\e\l\0\3\3\e\z\h\h\x\e\i\2\b\v\x\0\v\m\q\c\j\r\k\y\a\t\z\7\x\5\z\i\h\w\s\o\d\k\y\l\m\g\l\r\c\k\s\b\s\v\b\e\y\f\o\h\s\0\z\4\k\f\g\c\i\q\2\e\h\y\p\j\1\6\t\w\z\f\6\b\h\1\o\8\u\c\m\e\7\h\y\y\a\7\6\a\i\a\4\g\b\r\i\s\v\3\2\h\i\a\q\b\1\t\q\r\v\t\p\y\h\j\l\8\z\a\x\8\3\x\8\l\g\3\k\a\x\n\9\3\t\s\v\n\l\4\t\4\1\v\l\2\3\l\8\g\0\r\f\q\h\h\f\3\l\d\s\x\9\l\m\3\e\3\u\v\z\g\6\6\r\i\l\u\l\v\d\h\c\o\c\q\4\d\4\j\4\t\z\j\c\4\v\e\p\j\s\e\m\r\7\8\r\6\o\m\1\1\r\j\r\m\x\1\v\b\j\e\p\n\g\0\q\k\8\k\z\f\h\z\4\2\a\m\r\s\m\c\1\o\8\6\h\l\1\w\0\g\f\w\r\2\7\8\w\6\w\q\a\q\d\w\g\k\t\w\0\7\i\9\s\x\w\r\t\4\0\g\0\j\t\u\e\9\4\c\6\x\r\f\h\2\e\b\t\2\y\t\w\o\3\9\2\3\f\r\n\1\4\r\l\w\c\b\j\q\a\3\u\s\r\7\5\c\t\e\x\e\m\n\x\p\p\q\j\b\l\h\y\2\c\d\c\z\e\c\m\t\0\h\q\7\8\8\8\y\s\n\9\s\2\i\i\n\a\p\h\7\3\7\n\f\6\2\j\h\z\o\1\o\n\a\2\o\7\n\q\h\e\9\5\m\f\0\3\1\f\v\y\8\n\l\0\2\k\y\s\3\8\9\n\1\t\r\s\w\g\v\e\t\d\s\v\v\g\0\k\5\t\n\l\l\5\9\9\y\m\s\0\r\2\u\j\l\0\s ]] 00:26:33.771 16:44:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:33.771 16:44:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:33.771 [2024-07-13 16:44:05.117970] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:33.771 [2024-07-13 16:44:05.118219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144599 ] 00:26:34.029 [2024-07-13 16:44:05.272584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.030 [2024-07-13 16:44:05.340516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.596  Copying: 512/512 [B] (average 500 kBps) 00:26:34.596 00:26:34.596 16:44:05 -- dd/posix.sh@93 -- # [[ hicc4f27kxveprqhzf733bhqxl52qnieijd410yelcg5ha9utc6ncwsel033ezhhxei2bvx0vmqcjrkyatz7x5zihwsodkylmglrcksbsvbeyfohs0z4kfgciq2ehypj16twzf6bh1o8ucme7hyya76aia4gbrisv32hiaqb1tqrvtpyhjl8zax83x8lg3kaxn93tsvnl4t41vl23l8g0rfqhhf3ldsx9lm3e3uvzg66rilulvdhcocq4d4j4tzjc4vepjsemr78r6om11rjrmx1vbjepng0qk8kzfhz42amrsmc1o86hl1w0gfwr278w6wqaqdwgktw07i9sxwrt40g0jtue94c6xrfh2ebt2ytwo3923frn14rlwcbjqa3usr75ctexemnxppqjblhy2cdczecmt0hq7888ysn9s2iinaph737nf62jhzo1ona2o7nqhe95mf031fvy8nl02kys389n1trswgvetdsvvg0k5tnll599yms0r2ujl0s == \h\i\c\c\4\f\2\7\k\x\v\e\p\r\q\h\z\f\7\3\3\b\h\q\x\l\5\2\q\n\i\e\i\j\d\4\1\0\y\e\l\c\g\5\h\a\9\u\t\c\6\n\c\w\s\e\l\0\3\3\e\z\h\h\x\e\i\2\b\v\x\0\v\m\q\c\j\r\k\y\a\t\z\7\x\5\z\i\h\w\s\o\d\k\y\l\m\g\l\r\c\k\s\b\s\v\b\e\y\f\o\h\s\0\z\4\k\f\g\c\i\q\2\e\h\y\p\j\1\6\t\w\z\f\6\b\h\1\o\8\u\c\m\e\7\h\y\y\a\7\6\a\i\a\4\g\b\r\i\s\v\3\2\h\i\a\q\b\1\t\q\r\v\t\p\y\h\j\l\8\z\a\x\8\3\x\8\l\g\3\k\a\x\n\9\3\t\s\v\n\l\4\t\4\1\v\l\2\3\l\8\g\0\r\f\q\h\h\f\3\l\d\s\x\9\l\m\3\e\3\u\v\z\g\6\6\r\i\l\u\l\v\d\h\c\o\c\q\4\d\4\j\4\t\z\j\c\4\v\e\p\j\s\e\m\r\7\8\r\6\o\m\1\1\r\j\r\m\x\1\v\b\j\e\p\n\g\0\q\k\8\k\z\f\h\z\4\2\a\m\r\s\m\c\1\o\8\6\h\l\1\w\0\g\f\w\r\2\7\8\w\6\w\q\a\q\d\w\g\k\t\w\0\7\i\9\s\x\w\r\t\4\0\g\0\j\t\u\e\9\4\c\6\x\r\f\h\2\e\b\t\2\y\t\w\o\3\9\2\3\f\r\n\1\4\r\l\w\c\b\j\q\a\3\u\s\r\7\5\c\t\e\x\e\m\n\x\p\p\q\j\b\l\h\y\2\c\d\c\z\e\c\m\t\0\h\q\7\8\8\8\y\s\n\9\s\2\i\i\n\a\p\h\7\3\7\n\f\6\2\j\h\z\o\1\o\n\a\2\o\7\n\q\h\e\9\5\m\f\0\3\1\f\v\y\8\n\l\0\2\k\y\s\3\8\9\n\1\t\r\s\w\g\v\e\t\d\s\v\v\g\0\k\5\t\n\l\l\5\9\9\y\m\s\0\r\2\u\j\l\0\s ]] 00:26:34.596 16:44:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:34.596 16:44:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:34.596 [2024-07-13 16:44:05.927007] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:34.596 [2024-07-13 16:44:05.927198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144612 ] 00:26:34.855 [2024-07-13 16:44:06.066771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.855 [2024-07-13 16:44:06.136947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.422  Copying: 512/512 [B] (average 166 kBps) 00:26:35.422 00:26:35.422 16:44:06 -- dd/posix.sh@93 -- # [[ hicc4f27kxveprqhzf733bhqxl52qnieijd410yelcg5ha9utc6ncwsel033ezhhxei2bvx0vmqcjrkyatz7x5zihwsodkylmglrcksbsvbeyfohs0z4kfgciq2ehypj16twzf6bh1o8ucme7hyya76aia4gbrisv32hiaqb1tqrvtpyhjl8zax83x8lg3kaxn93tsvnl4t41vl23l8g0rfqhhf3ldsx9lm3e3uvzg66rilulvdhcocq4d4j4tzjc4vepjsemr78r6om11rjrmx1vbjepng0qk8kzfhz42amrsmc1o86hl1w0gfwr278w6wqaqdwgktw07i9sxwrt40g0jtue94c6xrfh2ebt2ytwo3923frn14rlwcbjqa3usr75ctexemnxppqjblhy2cdczecmt0hq7888ysn9s2iinaph737nf62jhzo1ona2o7nqhe95mf031fvy8nl02kys389n1trswgvetdsvvg0k5tnll599yms0r2ujl0s == \h\i\c\c\4\f\2\7\k\x\v\e\p\r\q\h\z\f\7\3\3\b\h\q\x\l\5\2\q\n\i\e\i\j\d\4\1\0\y\e\l\c\g\5\h\a\9\u\t\c\6\n\c\w\s\e\l\0\3\3\e\z\h\h\x\e\i\2\b\v\x\0\v\m\q\c\j\r\k\y\a\t\z\7\x\5\z\i\h\w\s\o\d\k\y\l\m\g\l\r\c\k\s\b\s\v\b\e\y\f\o\h\s\0\z\4\k\f\g\c\i\q\2\e\h\y\p\j\1\6\t\w\z\f\6\b\h\1\o\8\u\c\m\e\7\h\y\y\a\7\6\a\i\a\4\g\b\r\i\s\v\3\2\h\i\a\q\b\1\t\q\r\v\t\p\y\h\j\l\8\z\a\x\8\3\x\8\l\g\3\k\a\x\n\9\3\t\s\v\n\l\4\t\4\1\v\l\2\3\l\8\g\0\r\f\q\h\h\f\3\l\d\s\x\9\l\m\3\e\3\u\v\z\g\6\6\r\i\l\u\l\v\d\h\c\o\c\q\4\d\4\j\4\t\z\j\c\4\v\e\p\j\s\e\m\r\7\8\r\6\o\m\1\1\r\j\r\m\x\1\v\b\j\e\p\n\g\0\q\k\8\k\z\f\h\z\4\2\a\m\r\s\m\c\1\o\8\6\h\l\1\w\0\g\f\w\r\2\7\8\w\6\w\q\a\q\d\w\g\k\t\w\0\7\i\9\s\x\w\r\t\4\0\g\0\j\t\u\e\9\4\c\6\x\r\f\h\2\e\b\t\2\y\t\w\o\3\9\2\3\f\r\n\1\4\r\l\w\c\b\j\q\a\3\u\s\r\7\5\c\t\e\x\e\m\n\x\p\p\q\j\b\l\h\y\2\c\d\c\z\e\c\m\t\0\h\q\7\8\8\8\y\s\n\9\s\2\i\i\n\a\p\h\7\3\7\n\f\6\2\j\h\z\o\1\o\n\a\2\o\7\n\q\h\e\9\5\m\f\0\3\1\f\v\y\8\n\l\0\2\k\y\s\3\8\9\n\1\t\r\s\w\g\v\e\t\d\s\v\v\g\0\k\5\t\n\l\l\5\9\9\y\m\s\0\r\2\u\j\l\0\s ]] 00:26:35.422 16:44:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:35.422 16:44:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:35.422 [2024-07-13 16:44:06.747310] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:35.422 [2024-07-13 16:44:06.747608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144622 ] 00:26:35.682 [2024-07-13 16:44:06.898783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.682 [2024-07-13 16:44:06.967662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.250  Copying: 512/512 [B] (average 250 kBps) 00:26:36.250 00:26:36.250 16:44:07 -- dd/posix.sh@93 -- # [[ hicc4f27kxveprqhzf733bhqxl52qnieijd410yelcg5ha9utc6ncwsel033ezhhxei2bvx0vmqcjrkyatz7x5zihwsodkylmglrcksbsvbeyfohs0z4kfgciq2ehypj16twzf6bh1o8ucme7hyya76aia4gbrisv32hiaqb1tqrvtpyhjl8zax83x8lg3kaxn93tsvnl4t41vl23l8g0rfqhhf3ldsx9lm3e3uvzg66rilulvdhcocq4d4j4tzjc4vepjsemr78r6om11rjrmx1vbjepng0qk8kzfhz42amrsmc1o86hl1w0gfwr278w6wqaqdwgktw07i9sxwrt40g0jtue94c6xrfh2ebt2ytwo3923frn14rlwcbjqa3usr75ctexemnxppqjblhy2cdczecmt0hq7888ysn9s2iinaph737nf62jhzo1ona2o7nqhe95mf031fvy8nl02kys389n1trswgvetdsvvg0k5tnll599yms0r2ujl0s == \h\i\c\c\4\f\2\7\k\x\v\e\p\r\q\h\z\f\7\3\3\b\h\q\x\l\5\2\q\n\i\e\i\j\d\4\1\0\y\e\l\c\g\5\h\a\9\u\t\c\6\n\c\w\s\e\l\0\3\3\e\z\h\h\x\e\i\2\b\v\x\0\v\m\q\c\j\r\k\y\a\t\z\7\x\5\z\i\h\w\s\o\d\k\y\l\m\g\l\r\c\k\s\b\s\v\b\e\y\f\o\h\s\0\z\4\k\f\g\c\i\q\2\e\h\y\p\j\1\6\t\w\z\f\6\b\h\1\o\8\u\c\m\e\7\h\y\y\a\7\6\a\i\a\4\g\b\r\i\s\v\3\2\h\i\a\q\b\1\t\q\r\v\t\p\y\h\j\l\8\z\a\x\8\3\x\8\l\g\3\k\a\x\n\9\3\t\s\v\n\l\4\t\4\1\v\l\2\3\l\8\g\0\r\f\q\h\h\f\3\l\d\s\x\9\l\m\3\e\3\u\v\z\g\6\6\r\i\l\u\l\v\d\h\c\o\c\q\4\d\4\j\4\t\z\j\c\4\v\e\p\j\s\e\m\r\7\8\r\6\o\m\1\1\r\j\r\m\x\1\v\b\j\e\p\n\g\0\q\k\8\k\z\f\h\z\4\2\a\m\r\s\m\c\1\o\8\6\h\l\1\w\0\g\f\w\r\2\7\8\w\6\w\q\a\q\d\w\g\k\t\w\0\7\i\9\s\x\w\r\t\4\0\g\0\j\t\u\e\9\4\c\6\x\r\f\h\2\e\b\t\2\y\t\w\o\3\9\2\3\f\r\n\1\4\r\l\w\c\b\j\q\a\3\u\s\r\7\5\c\t\e\x\e\m\n\x\p\p\q\j\b\l\h\y\2\c\d\c\z\e\c\m\t\0\h\q\7\8\8\8\y\s\n\9\s\2\i\i\n\a\p\h\7\3\7\n\f\6\2\j\h\z\o\1\o\n\a\2\o\7\n\q\h\e\9\5\m\f\0\3\1\f\v\y\8\n\l\0\2\k\y\s\3\8\9\n\1\t\r\s\w\g\v\e\t\d\s\v\v\g\0\k\5\t\n\l\l\5\9\9\y\m\s\0\r\2\u\j\l\0\s ]] 00:26:36.250 00:26:36.250 real 0m6.659s 00:26:36.250 user 0m3.508s 00:26:36.250 sys 0m2.044s 00:26:36.250 ************************************ 00:26:36.250 END TEST dd_flags_misc 00:26:36.250 ************************************ 00:26:36.250 16:44:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.250 16:44:07 -- common/autotest_common.sh@10 -- # set +x 00:26:36.250 16:44:07 -- dd/posix.sh@131 -- # tests_forced_aio 00:26:36.250 16:44:07 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:26:36.250 * Second test run, using AIO 00:26:36.250 16:44:07 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:26:36.250 16:44:07 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:26:36.250 16:44:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:36.250 16:44:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:36.250 16:44:07 -- common/autotest_common.sh@10 -- # set +x 00:26:36.250 ************************************ 00:26:36.250 START TEST dd_flag_append_forced_aio 00:26:36.250 ************************************ 00:26:36.250 16:44:07 -- common/autotest_common.sh@1104 -- # append 00:26:36.250 16:44:07 -- dd/posix.sh@16 -- # local dump0 00:26:36.250 16:44:07 -- dd/posix.sh@17 -- # local dump1 00:26:36.250 16:44:07 -- dd/posix.sh@19 -- # gen_bytes 32 00:26:36.250 16:44:07 -- dd/common.sh@98 -- # xtrace_disable 
00:26:36.250 16:44:07 -- common/autotest_common.sh@10 -- # set +x 00:26:36.251 16:44:07 -- dd/posix.sh@19 -- # dump0=6ccu9ruzye6okrc8ctkyw0t7i8ogztgo 00:26:36.251 16:44:07 -- dd/posix.sh@20 -- # gen_bytes 32 00:26:36.251 16:44:07 -- dd/common.sh@98 -- # xtrace_disable 00:26:36.251 16:44:07 -- common/autotest_common.sh@10 -- # set +x 00:26:36.251 16:44:07 -- dd/posix.sh@20 -- # dump1=9kbgfbgoai1fsam1ize3wrdsnsdv37x9 00:26:36.251 16:44:07 -- dd/posix.sh@22 -- # printf %s 6ccu9ruzye6okrc8ctkyw0t7i8ogztgo 00:26:36.251 16:44:07 -- dd/posix.sh@23 -- # printf %s 9kbgfbgoai1fsam1ize3wrdsnsdv37x9 00:26:36.251 16:44:07 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:26:36.251 [2024-07-13 16:44:07.655036] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:36.251 [2024-07-13 16:44:07.655312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144661 ] 00:26:36.509 [2024-07-13 16:44:07.811299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.509 [2024-07-13 16:44:07.879727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.029  Copying: 32/32 [B] (average 31 kBps) 00:26:37.029 00:26:37.029 16:44:08 -- dd/posix.sh@27 -- # [[ 9kbgfbgoai1fsam1ize3wrdsnsdv37x96ccu9ruzye6okrc8ctkyw0t7i8ogztgo == \9\k\b\g\f\b\g\o\a\i\1\f\s\a\m\1\i\z\e\3\w\r\d\s\n\s\d\v\3\7\x\9\6\c\c\u\9\r\u\z\y\e\6\o\k\r\c\8\c\t\k\y\w\0\t\7\i\8\o\g\z\t\g\o ]] 00:26:37.029 00:26:37.029 real 0m0.853s 00:26:37.029 user 0m0.423s 00:26:37.029 sys 0m0.284s 00:26:37.029 ************************************ 00:26:37.029 END TEST dd_flag_append_forced_aio 00:26:37.029 ************************************ 00:26:37.029 16:44:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.029 16:44:08 -- common/autotest_common.sh@10 -- # set +x 00:26:37.029 16:44:08 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:26:37.029 16:44:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:37.029 16:44:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:37.029 16:44:08 -- common/autotest_common.sh@10 -- # set +x 00:26:37.289 ************************************ 00:26:37.289 START TEST dd_flag_directory_forced_aio 00:26:37.289 ************************************ 00:26:37.289 16:44:08 -- common/autotest_common.sh@1104 -- # directory 00:26:37.289 16:44:08 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:37.289 16:44:08 -- common/autotest_common.sh@640 -- # local es=0 00:26:37.289 16:44:08 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:37.289 16:44:08 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.289 16:44:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:37.289 16:44:08 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.289 16:44:08 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:37.289 16:44:08 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.289 16:44:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:37.289 16:44:08 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.289 16:44:08 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:37.289 16:44:08 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:37.289 [2024-07-13 16:44:08.577293] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:37.289 [2024-07-13 16:44:08.577567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144696 ] 00:26:37.289 [2024-07-13 16:44:08.732356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.548 [2024-07-13 16:44:08.804118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.548 [2024-07-13 16:44:08.921396] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:37.548 [2024-07-13 16:44:08.921491] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:37.548 [2024-07-13 16:44:08.921523] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:37.807 [2024-07-13 16:44:09.105654] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:38.066 16:44:09 -- common/autotest_common.sh@643 -- # es=236 00:26:38.066 16:44:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:38.066 16:44:09 -- common/autotest_common.sh@652 -- # es=108 00:26:38.066 16:44:09 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:38.066 16:44:09 -- common/autotest_common.sh@660 -- # es=1 00:26:38.066 16:44:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:38.066 16:44:09 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:38.066 16:44:09 -- common/autotest_common.sh@640 -- # local es=0 00:26:38.066 16:44:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:38.066 16:44:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.066 16:44:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.066 16:44:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.066 16:44:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.066 16:44:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.066 16:44:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.066 16:44:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:26:38.066 16:44:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:38.066 16:44:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:38.066 [2024-07-13 16:44:09.387844] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:38.066 [2024-07-13 16:44:09.388102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144717 ] 00:26:38.337 [2024-07-13 16:44:09.544048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.337 [2024-07-13 16:44:09.612483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.337 [2024-07-13 16:44:09.726677] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:38.337 [2024-07-13 16:44:09.726772] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:38.337 [2024-07-13 16:44:09.726811] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:38.595 [2024-07-13 16:44:09.906947] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:38.853 16:44:10 -- common/autotest_common.sh@643 -- # es=236 00:26:38.853 16:44:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:38.853 16:44:10 -- common/autotest_common.sh@652 -- # es=108 00:26:38.853 16:44:10 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:38.853 16:44:10 -- common/autotest_common.sh@660 -- # es=1 00:26:38.853 16:44:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:38.853 00:26:38.853 real 0m1.610s 00:26:38.853 user 0m0.900s 00:26:38.853 sys 0m0.511s 00:26:38.853 16:44:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.853 16:44:10 -- common/autotest_common.sh@10 -- # set +x 00:26:38.853 ************************************ 00:26:38.853 END TEST dd_flag_directory_forced_aio 00:26:38.853 ************************************ 00:26:38.853 16:44:10 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:26:38.853 16:44:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:38.853 16:44:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:38.853 16:44:10 -- common/autotest_common.sh@10 -- # set +x 00:26:38.853 ************************************ 00:26:38.853 START TEST dd_flag_nofollow_forced_aio 00:26:38.854 ************************************ 00:26:38.854 16:44:10 -- common/autotest_common.sh@1104 -- # nofollow 00:26:38.854 16:44:10 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:38.854 16:44:10 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:38.854 16:44:10 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:38.854 16:44:10 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:38.854 16:44:10 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:38.854 16:44:10 -- common/autotest_common.sh@640 -- # local es=0 00:26:38.854 16:44:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:38.854 16:44:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.854 16:44:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.854 16:44:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.854 16:44:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.854 16:44:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.854 16:44:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:38.854 16:44:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.854 16:44:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:38.854 16:44:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:38.854 [2024-07-13 16:44:10.268605] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:38.854 [2024-07-13 16:44:10.268851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144756 ] 00:26:39.112 [2024-07-13 16:44:10.423933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.112 [2024-07-13 16:44:10.495377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.369 [2024-07-13 16:44:10.610861] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:39.369 [2024-07-13 16:44:10.610954] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:39.369 [2024-07-13 16:44:10.610991] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:39.369 [2024-07-13 16:44:10.791455] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:39.627 16:44:10 -- common/autotest_common.sh@643 -- # es=216 00:26:39.627 16:44:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:39.627 16:44:10 -- common/autotest_common.sh@652 -- # es=88 00:26:39.627 16:44:10 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:39.627 16:44:10 -- common/autotest_common.sh@660 -- # es=1 00:26:39.627 16:44:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:39.627 16:44:10 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:39.627 16:44:10 -- common/autotest_common.sh@640 -- # local es=0 00:26:39.627 16:44:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:39.627 16:44:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:39.627 16:44:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:39.627 16:44:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:39.627 16:44:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:39.627 16:44:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:39.627 16:44:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:39.627 16:44:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:39.627 16:44:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:39.627 16:44:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:39.627 [2024-07-13 16:44:11.053609] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:39.627 [2024-07-13 16:44:11.053784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144770 ] 00:26:39.886 [2024-07-13 16:44:11.196610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.886 [2024-07-13 16:44:11.262944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.144 [2024-07-13 16:44:11.379599] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:40.144 [2024-07-13 16:44:11.379695] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:40.144 [2024-07-13 16:44:11.379746] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:40.144 [2024-07-13 16:44:11.561650] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:40.403 16:44:11 -- common/autotest_common.sh@643 -- # es=216 00:26:40.403 16:44:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:40.403 16:44:11 -- common/autotest_common.sh@652 -- # es=88 00:26:40.403 16:44:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:40.403 16:44:11 -- common/autotest_common.sh@660 -- # es=1 00:26:40.403 16:44:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:40.403 16:44:11 -- dd/posix.sh@46 -- # gen_bytes 512 00:26:40.403 16:44:11 -- dd/common.sh@98 -- # xtrace_disable 00:26:40.403 16:44:11 -- common/autotest_common.sh@10 -- # set +x 00:26:40.403 16:44:11 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:40.403 [2024-07-13 16:44:11.847857] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:40.403 [2024-07-13 16:44:11.848109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144779 ] 00:26:40.661 [2024-07-13 16:44:12.002909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.661 [2024-07-13 16:44:12.072556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.177  Copying: 512/512 [B] (average 500 kBps) 00:26:41.177 00:26:41.177 16:44:12 -- dd/posix.sh@49 -- # [[ ha962rnlfqf2ru0swve08vybn99q85qsg1k637ifbel6y089k20ru4v283hgaaslaffexzkzp3kdjuiheyh7f652lxk99fbzpyvvq4iu9om3mopn8q72h13306b0uspkdwo5mpjiajtxaqie73962m4j2ho6pkl808z94m7wxjfpd0wuh874xhrmrhjfbst9n0x98mdv2k6g97cfkhhjwzwkezttf7ues086ktbu90zs6g77gh11zcphpoyuakvxaxqw5rvafmynnjf2cmqvgpgw6wx7zdv32ha3loztab5hopuzsn11smt5ih33u56ykc8dz6n2hgj5xmv8eaf86txjfxf5il9tz7q1h3qybtuslk5l8il8w2qogypn0tv6svhknp43a5uj7ocll5ab0926pski01hp9yeepci58wp0yo3s44z97h0bip81eubny92i6a2o6haek2vs3yut9uiwrxasl33vynj652kr4dtt4sngdfo40weicmadszhr == \h\a\9\6\2\r\n\l\f\q\f\2\r\u\0\s\w\v\e\0\8\v\y\b\n\9\9\q\8\5\q\s\g\1\k\6\3\7\i\f\b\e\l\6\y\0\8\9\k\2\0\r\u\4\v\2\8\3\h\g\a\a\s\l\a\f\f\e\x\z\k\z\p\3\k\d\j\u\i\h\e\y\h\7\f\6\5\2\l\x\k\9\9\f\b\z\p\y\v\v\q\4\i\u\9\o\m\3\m\o\p\n\8\q\7\2\h\1\3\3\0\6\b\0\u\s\p\k\d\w\o\5\m\p\j\i\a\j\t\x\a\q\i\e\7\3\9\6\2\m\4\j\2\h\o\6\p\k\l\8\0\8\z\9\4\m\7\w\x\j\f\p\d\0\w\u\h\8\7\4\x\h\r\m\r\h\j\f\b\s\t\9\n\0\x\9\8\m\d\v\2\k\6\g\9\7\c\f\k\h\h\j\w\z\w\k\e\z\t\t\f\7\u\e\s\0\8\6\k\t\b\u\9\0\z\s\6\g\7\7\g\h\1\1\z\c\p\h\p\o\y\u\a\k\v\x\a\x\q\w\5\r\v\a\f\m\y\n\n\j\f\2\c\m\q\v\g\p\g\w\6\w\x\7\z\d\v\3\2\h\a\3\l\o\z\t\a\b\5\h\o\p\u\z\s\n\1\1\s\m\t\5\i\h\3\3\u\5\6\y\k\c\8\d\z\6\n\2\h\g\j\5\x\m\v\8\e\a\f\8\6\t\x\j\f\x\f\5\i\l\9\t\z\7\q\1\h\3\q\y\b\t\u\s\l\k\5\l\8\i\l\8\w\2\q\o\g\y\p\n\0\t\v\6\s\v\h\k\n\p\4\3\a\5\u\j\7\o\c\l\l\5\a\b\0\9\2\6\p\s\k\i\0\1\h\p\9\y\e\e\p\c\i\5\8\w\p\0\y\o\3\s\4\4\z\9\7\h\0\b\i\p\8\1\e\u\b\n\y\9\2\i\6\a\2\o\6\h\a\e\k\2\v\s\3\y\u\t\9\u\i\w\r\x\a\s\l\3\3\v\y\n\j\6\5\2\k\r\4\d\t\t\4\s\n\g\d\f\o\4\0\w\e\i\c\m\a\d\s\z\h\r ]] 00:26:41.177 00:26:41.177 real 0m2.424s 00:26:41.177 user 0m1.259s 00:26:41.177 sys 0m0.833s 00:26:41.177 ************************************ 00:26:41.177 END TEST dd_flag_nofollow_forced_aio 00:26:41.177 ************************************ 00:26:41.177 16:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.177 16:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:41.436 16:44:12 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:26:41.436 16:44:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:41.436 16:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:41.436 16:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:41.436 ************************************ 00:26:41.436 START TEST dd_flag_noatime_forced_aio 00:26:41.436 ************************************ 00:26:41.436 16:44:12 -- common/autotest_common.sh@1104 -- # noatime 00:26:41.436 16:44:12 -- dd/posix.sh@53 -- # local atime_if 00:26:41.436 16:44:12 -- dd/posix.sh@54 -- # local atime_of 00:26:41.436 16:44:12 -- dd/posix.sh@58 -- # gen_bytes 512 00:26:41.436 16:44:12 -- dd/common.sh@98 -- # xtrace_disable 00:26:41.436 16:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:41.436 16:44:12 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:41.436 16:44:12 -- dd/posix.sh@60 -- # atime_if=1720889052 
00:26:41.436 16:44:12 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:41.436 16:44:12 -- dd/posix.sh@61 -- # atime_of=1720889052 00:26:41.436 16:44:12 -- dd/posix.sh@66 -- # sleep 1 00:26:42.368 16:44:13 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:42.368 [2024-07-13 16:44:13.777135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:42.368 [2024-07-13 16:44:13.777402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144831 ] 00:26:42.626 [2024-07-13 16:44:13.937487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.626 [2024-07-13 16:44:14.015645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.141  Copying: 512/512 [B] (average 500 kBps) 00:26:43.142 00:26:43.142 16:44:14 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:43.142 16:44:14 -- dd/posix.sh@69 -- # (( atime_if == 1720889052 )) 00:26:43.142 16:44:14 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:43.142 16:44:14 -- dd/posix.sh@70 -- # (( atime_of == 1720889052 )) 00:26:43.142 16:44:14 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:43.400 [2024-07-13 16:44:14.645911] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:43.400 [2024-07-13 16:44:14.646135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144850 ] 00:26:43.400 [2024-07-13 16:44:14.800081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.400 [2024-07-13 16:44:14.867069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.229  Copying: 512/512 [B] (average 500 kBps) 00:26:44.229 00:26:44.229 16:44:15 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:44.229 16:44:15 -- dd/posix.sh@73 -- # (( atime_if < 1720889054 )) 00:26:44.229 00:26:44.229 real 0m2.737s 00:26:44.229 user 0m0.902s 00:26:44.229 sys 0m0.559s 00:26:44.229 ************************************ 00:26:44.229 END TEST dd_flag_noatime_forced_aio 00:26:44.229 ************************************ 00:26:44.229 16:44:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.229 16:44:15 -- common/autotest_common.sh@10 -- # set +x 00:26:44.229 16:44:15 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:26:44.229 16:44:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:44.229 16:44:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:44.229 16:44:15 -- common/autotest_common.sh@10 -- # set +x 00:26:44.229 ************************************ 00:26:44.229 START TEST dd_flags_misc_forced_aio 00:26:44.229 ************************************ 00:26:44.229 16:44:15 -- common/autotest_common.sh@1104 -- # io 00:26:44.229 16:44:15 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:44.229 16:44:15 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:26:44.229 16:44:15 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:44.229 16:44:15 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:44.229 16:44:15 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:44.229 16:44:15 -- dd/common.sh@98 -- # xtrace_disable 00:26:44.229 16:44:15 -- common/autotest_common.sh@10 -- # set +x 00:26:44.229 16:44:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:44.230 16:44:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:44.230 [2024-07-13 16:44:15.558568] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:44.230 [2024-07-13 16:44:15.558822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144888 ] 00:26:44.503 [2024-07-13 16:44:15.715239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.503 [2024-07-13 16:44:15.782379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.106  Copying: 512/512 [B] (average 500 kBps) 00:26:45.106 00:26:45.106 16:44:16 -- dd/posix.sh@93 -- # [[ 2fnruvzojqo5lqmaoy9orksa49kowg0pj9f1740d9hj66zu4stsvskfrl3nlxw46sa2qe7i9r8z5tu63dpe2yqok9dxfkx8f4u24stmm7vohkf8vyvtg1qgjcl2slvh6xytud9zil2wdkb6wouot3b3pjqv4bbtmkrkxg8s8rnqfc29zczoqzpwjmyyv4g8agp0ry4w4aiduux6rkgyjpv0vvwngk9ppnwf6n8ld2si5spo6tggmnrs9jb1btum0gc3jyxbea2hbpqm1suu1tfy7jynrtjvuozevbr8elvvj9nrmhosrdsi3k76afcxlx99i0pmluu47umn4n95cn3og84u4iju4fnwuqatqkl6srfpw499p5nsup9byu39qcy1xfg5bcojtdmw4p42lzgii3msjzcdbj52crpjfykdjdjo6b1ut0ggcdw88eszeppki4kdu83gks80pkaa5g5evfi0uqognhdu2m4p0y2bwpzy4b4gsbzhxa748vcod == \2\f\n\r\u\v\z\o\j\q\o\5\l\q\m\a\o\y\9\o\r\k\s\a\4\9\k\o\w\g\0\p\j\9\f\1\7\4\0\d\9\h\j\6\6\z\u\4\s\t\s\v\s\k\f\r\l\3\n\l\x\w\4\6\s\a\2\q\e\7\i\9\r\8\z\5\t\u\6\3\d\p\e\2\y\q\o\k\9\d\x\f\k\x\8\f\4\u\2\4\s\t\m\m\7\v\o\h\k\f\8\v\y\v\t\g\1\q\g\j\c\l\2\s\l\v\h\6\x\y\t\u\d\9\z\i\l\2\w\d\k\b\6\w\o\u\o\t\3\b\3\p\j\q\v\4\b\b\t\m\k\r\k\x\g\8\s\8\r\n\q\f\c\2\9\z\c\z\o\q\z\p\w\j\m\y\y\v\4\g\8\a\g\p\0\r\y\4\w\4\a\i\d\u\u\x\6\r\k\g\y\j\p\v\0\v\v\w\n\g\k\9\p\p\n\w\f\6\n\8\l\d\2\s\i\5\s\p\o\6\t\g\g\m\n\r\s\9\j\b\1\b\t\u\m\0\g\c\3\j\y\x\b\e\a\2\h\b\p\q\m\1\s\u\u\1\t\f\y\7\j\y\n\r\t\j\v\u\o\z\e\v\b\r\8\e\l\v\v\j\9\n\r\m\h\o\s\r\d\s\i\3\k\7\6\a\f\c\x\l\x\9\9\i\0\p\m\l\u\u\4\7\u\m\n\4\n\9\5\c\n\3\o\g\8\4\u\4\i\j\u\4\f\n\w\u\q\a\t\q\k\l\6\s\r\f\p\w\4\9\9\p\5\n\s\u\p\9\b\y\u\3\9\q\c\y\1\x\f\g\5\b\c\o\j\t\d\m\w\4\p\4\2\l\z\g\i\i\3\m\s\j\z\c\d\b\j\5\2\c\r\p\j\f\y\k\d\j\d\j\o\6\b\1\u\t\0\g\g\c\d\w\8\8\e\s\z\e\p\p\k\i\4\k\d\u\8\3\g\k\s\8\0\p\k\a\a\5\g\5\e\v\f\i\0\u\q\o\g\n\h\d\u\2\m\4\p\0\y\2\b\w\p\z\y\4\b\4\g\s\b\z\h\x\a\7\4\8\v\c\o\d ]] 00:26:45.106 16:44:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:45.106 16:44:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:45.106 [2024-07-13 16:44:16.399583] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:45.106 [2024-07-13 16:44:16.399866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144908 ] 00:26:45.106 [2024-07-13 16:44:16.555627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.364 [2024-07-13 16:44:16.631311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.932  Copying: 512/512 [B] (average 500 kBps) 00:26:45.932 00:26:45.932 16:44:17 -- dd/posix.sh@93 -- # [[ 2fnruvzojqo5lqmaoy9orksa49kowg0pj9f1740d9hj66zu4stsvskfrl3nlxw46sa2qe7i9r8z5tu63dpe2yqok9dxfkx8f4u24stmm7vohkf8vyvtg1qgjcl2slvh6xytud9zil2wdkb6wouot3b3pjqv4bbtmkrkxg8s8rnqfc29zczoqzpwjmyyv4g8agp0ry4w4aiduux6rkgyjpv0vvwngk9ppnwf6n8ld2si5spo6tggmnrs9jb1btum0gc3jyxbea2hbpqm1suu1tfy7jynrtjvuozevbr8elvvj9nrmhosrdsi3k76afcxlx99i0pmluu47umn4n95cn3og84u4iju4fnwuqatqkl6srfpw499p5nsup9byu39qcy1xfg5bcojtdmw4p42lzgii3msjzcdbj52crpjfykdjdjo6b1ut0ggcdw88eszeppki4kdu83gks80pkaa5g5evfi0uqognhdu2m4p0y2bwpzy4b4gsbzhxa748vcod == \2\f\n\r\u\v\z\o\j\q\o\5\l\q\m\a\o\y\9\o\r\k\s\a\4\9\k\o\w\g\0\p\j\9\f\1\7\4\0\d\9\h\j\6\6\z\u\4\s\t\s\v\s\k\f\r\l\3\n\l\x\w\4\6\s\a\2\q\e\7\i\9\r\8\z\5\t\u\6\3\d\p\e\2\y\q\o\k\9\d\x\f\k\x\8\f\4\u\2\4\s\t\m\m\7\v\o\h\k\f\8\v\y\v\t\g\1\q\g\j\c\l\2\s\l\v\h\6\x\y\t\u\d\9\z\i\l\2\w\d\k\b\6\w\o\u\o\t\3\b\3\p\j\q\v\4\b\b\t\m\k\r\k\x\g\8\s\8\r\n\q\f\c\2\9\z\c\z\o\q\z\p\w\j\m\y\y\v\4\g\8\a\g\p\0\r\y\4\w\4\a\i\d\u\u\x\6\r\k\g\y\j\p\v\0\v\v\w\n\g\k\9\p\p\n\w\f\6\n\8\l\d\2\s\i\5\s\p\o\6\t\g\g\m\n\r\s\9\j\b\1\b\t\u\m\0\g\c\3\j\y\x\b\e\a\2\h\b\p\q\m\1\s\u\u\1\t\f\y\7\j\y\n\r\t\j\v\u\o\z\e\v\b\r\8\e\l\v\v\j\9\n\r\m\h\o\s\r\d\s\i\3\k\7\6\a\f\c\x\l\x\9\9\i\0\p\m\l\u\u\4\7\u\m\n\4\n\9\5\c\n\3\o\g\8\4\u\4\i\j\u\4\f\n\w\u\q\a\t\q\k\l\6\s\r\f\p\w\4\9\9\p\5\n\s\u\p\9\b\y\u\3\9\q\c\y\1\x\f\g\5\b\c\o\j\t\d\m\w\4\p\4\2\l\z\g\i\i\3\m\s\j\z\c\d\b\j\5\2\c\r\p\j\f\y\k\d\j\d\j\o\6\b\1\u\t\0\g\g\c\d\w\8\8\e\s\z\e\p\p\k\i\4\k\d\u\8\3\g\k\s\8\0\p\k\a\a\5\g\5\e\v\f\i\0\u\q\o\g\n\h\d\u\2\m\4\p\0\y\2\b\w\p\z\y\4\b\4\g\s\b\z\h\x\a\7\4\8\v\c\o\d ]] 00:26:45.932 16:44:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:45.932 16:44:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:45.932 [2024-07-13 16:44:17.242895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:45.932 [2024-07-13 16:44:17.243184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144920 ] 00:26:45.932 [2024-07-13 16:44:17.398620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.191 [2024-07-13 16:44:17.476838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.759  Copying: 512/512 [B] (average 250 kBps) 00:26:46.759 00:26:46.759 16:44:18 -- dd/posix.sh@93 -- # [[ 2fnruvzojqo5lqmaoy9orksa49kowg0pj9f1740d9hj66zu4stsvskfrl3nlxw46sa2qe7i9r8z5tu63dpe2yqok9dxfkx8f4u24stmm7vohkf8vyvtg1qgjcl2slvh6xytud9zil2wdkb6wouot3b3pjqv4bbtmkrkxg8s8rnqfc29zczoqzpwjmyyv4g8agp0ry4w4aiduux6rkgyjpv0vvwngk9ppnwf6n8ld2si5spo6tggmnrs9jb1btum0gc3jyxbea2hbpqm1suu1tfy7jynrtjvuozevbr8elvvj9nrmhosrdsi3k76afcxlx99i0pmluu47umn4n95cn3og84u4iju4fnwuqatqkl6srfpw499p5nsup9byu39qcy1xfg5bcojtdmw4p42lzgii3msjzcdbj52crpjfykdjdjo6b1ut0ggcdw88eszeppki4kdu83gks80pkaa5g5evfi0uqognhdu2m4p0y2bwpzy4b4gsbzhxa748vcod == \2\f\n\r\u\v\z\o\j\q\o\5\l\q\m\a\o\y\9\o\r\k\s\a\4\9\k\o\w\g\0\p\j\9\f\1\7\4\0\d\9\h\j\6\6\z\u\4\s\t\s\v\s\k\f\r\l\3\n\l\x\w\4\6\s\a\2\q\e\7\i\9\r\8\z\5\t\u\6\3\d\p\e\2\y\q\o\k\9\d\x\f\k\x\8\f\4\u\2\4\s\t\m\m\7\v\o\h\k\f\8\v\y\v\t\g\1\q\g\j\c\l\2\s\l\v\h\6\x\y\t\u\d\9\z\i\l\2\w\d\k\b\6\w\o\u\o\t\3\b\3\p\j\q\v\4\b\b\t\m\k\r\k\x\g\8\s\8\r\n\q\f\c\2\9\z\c\z\o\q\z\p\w\j\m\y\y\v\4\g\8\a\g\p\0\r\y\4\w\4\a\i\d\u\u\x\6\r\k\g\y\j\p\v\0\v\v\w\n\g\k\9\p\p\n\w\f\6\n\8\l\d\2\s\i\5\s\p\o\6\t\g\g\m\n\r\s\9\j\b\1\b\t\u\m\0\g\c\3\j\y\x\b\e\a\2\h\b\p\q\m\1\s\u\u\1\t\f\y\7\j\y\n\r\t\j\v\u\o\z\e\v\b\r\8\e\l\v\v\j\9\n\r\m\h\o\s\r\d\s\i\3\k\7\6\a\f\c\x\l\x\9\9\i\0\p\m\l\u\u\4\7\u\m\n\4\n\9\5\c\n\3\o\g\8\4\u\4\i\j\u\4\f\n\w\u\q\a\t\q\k\l\6\s\r\f\p\w\4\9\9\p\5\n\s\u\p\9\b\y\u\3\9\q\c\y\1\x\f\g\5\b\c\o\j\t\d\m\w\4\p\4\2\l\z\g\i\i\3\m\s\j\z\c\d\b\j\5\2\c\r\p\j\f\y\k\d\j\d\j\o\6\b\1\u\t\0\g\g\c\d\w\8\8\e\s\z\e\p\p\k\i\4\k\d\u\8\3\g\k\s\8\0\p\k\a\a\5\g\5\e\v\f\i\0\u\q\o\g\n\h\d\u\2\m\4\p\0\y\2\b\w\p\z\y\4\b\4\g\s\b\z\h\x\a\7\4\8\v\c\o\d ]] 00:26:46.759 16:44:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:46.759 16:44:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:46.759 [2024-07-13 16:44:18.121401] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:46.759 [2024-07-13 16:44:18.121670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144931 ] 00:26:47.019 [2024-07-13 16:44:18.279529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.019 [2024-07-13 16:44:18.363982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.587  Copying: 512/512 [B] (average 166 kBps) 00:26:47.587 00:26:47.587 16:44:18 -- dd/posix.sh@93 -- # [[ 2fnruvzojqo5lqmaoy9orksa49kowg0pj9f1740d9hj66zu4stsvskfrl3nlxw46sa2qe7i9r8z5tu63dpe2yqok9dxfkx8f4u24stmm7vohkf8vyvtg1qgjcl2slvh6xytud9zil2wdkb6wouot3b3pjqv4bbtmkrkxg8s8rnqfc29zczoqzpwjmyyv4g8agp0ry4w4aiduux6rkgyjpv0vvwngk9ppnwf6n8ld2si5spo6tggmnrs9jb1btum0gc3jyxbea2hbpqm1suu1tfy7jynrtjvuozevbr8elvvj9nrmhosrdsi3k76afcxlx99i0pmluu47umn4n95cn3og84u4iju4fnwuqatqkl6srfpw499p5nsup9byu39qcy1xfg5bcojtdmw4p42lzgii3msjzcdbj52crpjfykdjdjo6b1ut0ggcdw88eszeppki4kdu83gks80pkaa5g5evfi0uqognhdu2m4p0y2bwpzy4b4gsbzhxa748vcod == \2\f\n\r\u\v\z\o\j\q\o\5\l\q\m\a\o\y\9\o\r\k\s\a\4\9\k\o\w\g\0\p\j\9\f\1\7\4\0\d\9\h\j\6\6\z\u\4\s\t\s\v\s\k\f\r\l\3\n\l\x\w\4\6\s\a\2\q\e\7\i\9\r\8\z\5\t\u\6\3\d\p\e\2\y\q\o\k\9\d\x\f\k\x\8\f\4\u\2\4\s\t\m\m\7\v\o\h\k\f\8\v\y\v\t\g\1\q\g\j\c\l\2\s\l\v\h\6\x\y\t\u\d\9\z\i\l\2\w\d\k\b\6\w\o\u\o\t\3\b\3\p\j\q\v\4\b\b\t\m\k\r\k\x\g\8\s\8\r\n\q\f\c\2\9\z\c\z\o\q\z\p\w\j\m\y\y\v\4\g\8\a\g\p\0\r\y\4\w\4\a\i\d\u\u\x\6\r\k\g\y\j\p\v\0\v\v\w\n\g\k\9\p\p\n\w\f\6\n\8\l\d\2\s\i\5\s\p\o\6\t\g\g\m\n\r\s\9\j\b\1\b\t\u\m\0\g\c\3\j\y\x\b\e\a\2\h\b\p\q\m\1\s\u\u\1\t\f\y\7\j\y\n\r\t\j\v\u\o\z\e\v\b\r\8\e\l\v\v\j\9\n\r\m\h\o\s\r\d\s\i\3\k\7\6\a\f\c\x\l\x\9\9\i\0\p\m\l\u\u\4\7\u\m\n\4\n\9\5\c\n\3\o\g\8\4\u\4\i\j\u\4\f\n\w\u\q\a\t\q\k\l\6\s\r\f\p\w\4\9\9\p\5\n\s\u\p\9\b\y\u\3\9\q\c\y\1\x\f\g\5\b\c\o\j\t\d\m\w\4\p\4\2\l\z\g\i\i\3\m\s\j\z\c\d\b\j\5\2\c\r\p\j\f\y\k\d\j\d\j\o\6\b\1\u\t\0\g\g\c\d\w\8\8\e\s\z\e\p\p\k\i\4\k\d\u\8\3\g\k\s\8\0\p\k\a\a\5\g\5\e\v\f\i\0\u\q\o\g\n\h\d\u\2\m\4\p\0\y\2\b\w\p\z\y\4\b\4\g\s\b\z\h\x\a\7\4\8\v\c\o\d ]] 00:26:47.587 16:44:18 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:47.587 16:44:18 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:47.587 16:44:18 -- dd/common.sh@98 -- # xtrace_disable 00:26:47.587 16:44:18 -- common/autotest_common.sh@10 -- # set +x 00:26:47.587 16:44:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:47.587 16:44:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:47.587 [2024-07-13 16:44:18.981559] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:47.587 [2024-07-13 16:44:18.981757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144947 ] 00:26:47.847 [2024-07-13 16:44:19.124343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.847 [2024-07-13 16:44:19.206173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.366  Copying: 512/512 [B] (average 500 kBps) 00:26:48.366 00:26:48.366 16:44:19 -- dd/posix.sh@93 -- # [[ fcc4ai8dcc58r6n8sv84r54qqnisjrgbcwuaz88bj6j3ozla3lkhl4a6w0xyz7m8cahq9xi4egv50dr32ejyawlqjaee5bkhunsxhgdmdu90f1pk3i7lgkg1q6tr4l18wd4c0gbird9hjh2isbh66gzgop8vfwh23ib2z2gdgofglfkmlzhgdneeepr4zdp5zhnnkbwu6qywi5hci9ry4jozdd6izlbu0tefgs5yn7711ururzmx39dtj7rglhtcs09hiqkf2ibplgy35ew2xbpdip0t9kxxazctdwqacunp1lafmwssrba5l1a6za1st07byw2z26ttsvlwh3a53rroxx1ov27ov0mq14e5swmxabf1ww20aj351k5wxxuodr9fnawz3mdh00y3bi1rzjpr0384ga7ph5hwx7rx5gctubs08hyoh6copgl5og1b9nltehjvzm7otdz01hhlij38rgau48yyutcsa3uig4inwhy179s10uf1m5vgnvvi == \f\c\c\4\a\i\8\d\c\c\5\8\r\6\n\8\s\v\8\4\r\5\4\q\q\n\i\s\j\r\g\b\c\w\u\a\z\8\8\b\j\6\j\3\o\z\l\a\3\l\k\h\l\4\a\6\w\0\x\y\z\7\m\8\c\a\h\q\9\x\i\4\e\g\v\5\0\d\r\3\2\e\j\y\a\w\l\q\j\a\e\e\5\b\k\h\u\n\s\x\h\g\d\m\d\u\9\0\f\1\p\k\3\i\7\l\g\k\g\1\q\6\t\r\4\l\1\8\w\d\4\c\0\g\b\i\r\d\9\h\j\h\2\i\s\b\h\6\6\g\z\g\o\p\8\v\f\w\h\2\3\i\b\2\z\2\g\d\g\o\f\g\l\f\k\m\l\z\h\g\d\n\e\e\e\p\r\4\z\d\p\5\z\h\n\n\k\b\w\u\6\q\y\w\i\5\h\c\i\9\r\y\4\j\o\z\d\d\6\i\z\l\b\u\0\t\e\f\g\s\5\y\n\7\7\1\1\u\r\u\r\z\m\x\3\9\d\t\j\7\r\g\l\h\t\c\s\0\9\h\i\q\k\f\2\i\b\p\l\g\y\3\5\e\w\2\x\b\p\d\i\p\0\t\9\k\x\x\a\z\c\t\d\w\q\a\c\u\n\p\1\l\a\f\m\w\s\s\r\b\a\5\l\1\a\6\z\a\1\s\t\0\7\b\y\w\2\z\2\6\t\t\s\v\l\w\h\3\a\5\3\r\r\o\x\x\1\o\v\2\7\o\v\0\m\q\1\4\e\5\s\w\m\x\a\b\f\1\w\w\2\0\a\j\3\5\1\k\5\w\x\x\u\o\d\r\9\f\n\a\w\z\3\m\d\h\0\0\y\3\b\i\1\r\z\j\p\r\0\3\8\4\g\a\7\p\h\5\h\w\x\7\r\x\5\g\c\t\u\b\s\0\8\h\y\o\h\6\c\o\p\g\l\5\o\g\1\b\9\n\l\t\e\h\j\v\z\m\7\o\t\d\z\0\1\h\h\l\i\j\3\8\r\g\a\u\4\8\y\y\u\t\c\s\a\3\u\i\g\4\i\n\w\h\y\1\7\9\s\1\0\u\f\1\m\5\v\g\n\v\v\i ]] 00:26:48.366 16:44:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:48.366 16:44:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:48.366 [2024-07-13 16:44:19.819753] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:48.366 [2024-07-13 16:44:19.819946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144960 ] 00:26:48.625 [2024-07-13 16:44:19.963368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.625 [2024-07-13 16:44:20.055263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.144  Copying: 512/512 [B] (average 500 kBps) 00:26:49.144 00:26:49.144 16:44:20 -- dd/posix.sh@93 -- # [[ fcc4ai8dcc58r6n8sv84r54qqnisjrgbcwuaz88bj6j3ozla3lkhl4a6w0xyz7m8cahq9xi4egv50dr32ejyawlqjaee5bkhunsxhgdmdu90f1pk3i7lgkg1q6tr4l18wd4c0gbird9hjh2isbh66gzgop8vfwh23ib2z2gdgofglfkmlzhgdneeepr4zdp5zhnnkbwu6qywi5hci9ry4jozdd6izlbu0tefgs5yn7711ururzmx39dtj7rglhtcs09hiqkf2ibplgy35ew2xbpdip0t9kxxazctdwqacunp1lafmwssrba5l1a6za1st07byw2z26ttsvlwh3a53rroxx1ov27ov0mq14e5swmxabf1ww20aj351k5wxxuodr9fnawz3mdh00y3bi1rzjpr0384ga7ph5hwx7rx5gctubs08hyoh6copgl5og1b9nltehjvzm7otdz01hhlij38rgau48yyutcsa3uig4inwhy179s10uf1m5vgnvvi == \f\c\c\4\a\i\8\d\c\c\5\8\r\6\n\8\s\v\8\4\r\5\4\q\q\n\i\s\j\r\g\b\c\w\u\a\z\8\8\b\j\6\j\3\o\z\l\a\3\l\k\h\l\4\a\6\w\0\x\y\z\7\m\8\c\a\h\q\9\x\i\4\e\g\v\5\0\d\r\3\2\e\j\y\a\w\l\q\j\a\e\e\5\b\k\h\u\n\s\x\h\g\d\m\d\u\9\0\f\1\p\k\3\i\7\l\g\k\g\1\q\6\t\r\4\l\1\8\w\d\4\c\0\g\b\i\r\d\9\h\j\h\2\i\s\b\h\6\6\g\z\g\o\p\8\v\f\w\h\2\3\i\b\2\z\2\g\d\g\o\f\g\l\f\k\m\l\z\h\g\d\n\e\e\e\p\r\4\z\d\p\5\z\h\n\n\k\b\w\u\6\q\y\w\i\5\h\c\i\9\r\y\4\j\o\z\d\d\6\i\z\l\b\u\0\t\e\f\g\s\5\y\n\7\7\1\1\u\r\u\r\z\m\x\3\9\d\t\j\7\r\g\l\h\t\c\s\0\9\h\i\q\k\f\2\i\b\p\l\g\y\3\5\e\w\2\x\b\p\d\i\p\0\t\9\k\x\x\a\z\c\t\d\w\q\a\c\u\n\p\1\l\a\f\m\w\s\s\r\b\a\5\l\1\a\6\z\a\1\s\t\0\7\b\y\w\2\z\2\6\t\t\s\v\l\w\h\3\a\5\3\r\r\o\x\x\1\o\v\2\7\o\v\0\m\q\1\4\e\5\s\w\m\x\a\b\f\1\w\w\2\0\a\j\3\5\1\k\5\w\x\x\u\o\d\r\9\f\n\a\w\z\3\m\d\h\0\0\y\3\b\i\1\r\z\j\p\r\0\3\8\4\g\a\7\p\h\5\h\w\x\7\r\x\5\g\c\t\u\b\s\0\8\h\y\o\h\6\c\o\p\g\l\5\o\g\1\b\9\n\l\t\e\h\j\v\z\m\7\o\t\d\z\0\1\h\h\l\i\j\3\8\r\g\a\u\4\8\y\y\u\t\c\s\a\3\u\i\g\4\i\n\w\h\y\1\7\9\s\1\0\u\f\1\m\5\v\g\n\v\v\i ]] 00:26:49.144 16:44:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:49.144 16:44:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:49.404 [2024-07-13 16:44:20.663443] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:49.404 [2024-07-13 16:44:20.663635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144977 ] 00:26:49.404 [2024-07-13 16:44:20.807536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.664 [2024-07-13 16:44:20.893938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.234  Copying: 512/512 [B] (average 166 kBps) 00:26:50.234 00:26:50.234 16:44:21 -- dd/posix.sh@93 -- # [[ fcc4ai8dcc58r6n8sv84r54qqnisjrgbcwuaz88bj6j3ozla3lkhl4a6w0xyz7m8cahq9xi4egv50dr32ejyawlqjaee5bkhunsxhgdmdu90f1pk3i7lgkg1q6tr4l18wd4c0gbird9hjh2isbh66gzgop8vfwh23ib2z2gdgofglfkmlzhgdneeepr4zdp5zhnnkbwu6qywi5hci9ry4jozdd6izlbu0tefgs5yn7711ururzmx39dtj7rglhtcs09hiqkf2ibplgy35ew2xbpdip0t9kxxazctdwqacunp1lafmwssrba5l1a6za1st07byw2z26ttsvlwh3a53rroxx1ov27ov0mq14e5swmxabf1ww20aj351k5wxxuodr9fnawz3mdh00y3bi1rzjpr0384ga7ph5hwx7rx5gctubs08hyoh6copgl5og1b9nltehjvzm7otdz01hhlij38rgau48yyutcsa3uig4inwhy179s10uf1m5vgnvvi == \f\c\c\4\a\i\8\d\c\c\5\8\r\6\n\8\s\v\8\4\r\5\4\q\q\n\i\s\j\r\g\b\c\w\u\a\z\8\8\b\j\6\j\3\o\z\l\a\3\l\k\h\l\4\a\6\w\0\x\y\z\7\m\8\c\a\h\q\9\x\i\4\e\g\v\5\0\d\r\3\2\e\j\y\a\w\l\q\j\a\e\e\5\b\k\h\u\n\s\x\h\g\d\m\d\u\9\0\f\1\p\k\3\i\7\l\g\k\g\1\q\6\t\r\4\l\1\8\w\d\4\c\0\g\b\i\r\d\9\h\j\h\2\i\s\b\h\6\6\g\z\g\o\p\8\v\f\w\h\2\3\i\b\2\z\2\g\d\g\o\f\g\l\f\k\m\l\z\h\g\d\n\e\e\e\p\r\4\z\d\p\5\z\h\n\n\k\b\w\u\6\q\y\w\i\5\h\c\i\9\r\y\4\j\o\z\d\d\6\i\z\l\b\u\0\t\e\f\g\s\5\y\n\7\7\1\1\u\r\u\r\z\m\x\3\9\d\t\j\7\r\g\l\h\t\c\s\0\9\h\i\q\k\f\2\i\b\p\l\g\y\3\5\e\w\2\x\b\p\d\i\p\0\t\9\k\x\x\a\z\c\t\d\w\q\a\c\u\n\p\1\l\a\f\m\w\s\s\r\b\a\5\l\1\a\6\z\a\1\s\t\0\7\b\y\w\2\z\2\6\t\t\s\v\l\w\h\3\a\5\3\r\r\o\x\x\1\o\v\2\7\o\v\0\m\q\1\4\e\5\s\w\m\x\a\b\f\1\w\w\2\0\a\j\3\5\1\k\5\w\x\x\u\o\d\r\9\f\n\a\w\z\3\m\d\h\0\0\y\3\b\i\1\r\z\j\p\r\0\3\8\4\g\a\7\p\h\5\h\w\x\7\r\x\5\g\c\t\u\b\s\0\8\h\y\o\h\6\c\o\p\g\l\5\o\g\1\b\9\n\l\t\e\h\j\v\z\m\7\o\t\d\z\0\1\h\h\l\i\j\3\8\r\g\a\u\4\8\y\y\u\t\c\s\a\3\u\i\g\4\i\n\w\h\y\1\7\9\s\1\0\u\f\1\m\5\v\g\n\v\v\i ]] 00:26:50.234 16:44:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:50.234 16:44:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:50.234 [2024-07-13 16:44:21.549419] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:50.234 [2024-07-13 16:44:21.549695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144994 ] 00:26:50.234 [2024-07-13 16:44:21.703773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.493 [2024-07-13 16:44:21.788256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.062  Copying: 512/512 [B] (average 166 kBps) 00:26:51.062 00:26:51.062 16:44:22 -- dd/posix.sh@93 -- # [[ fcc4ai8dcc58r6n8sv84r54qqnisjrgbcwuaz88bj6j3ozla3lkhl4a6w0xyz7m8cahq9xi4egv50dr32ejyawlqjaee5bkhunsxhgdmdu90f1pk3i7lgkg1q6tr4l18wd4c0gbird9hjh2isbh66gzgop8vfwh23ib2z2gdgofglfkmlzhgdneeepr4zdp5zhnnkbwu6qywi5hci9ry4jozdd6izlbu0tefgs5yn7711ururzmx39dtj7rglhtcs09hiqkf2ibplgy35ew2xbpdip0t9kxxazctdwqacunp1lafmwssrba5l1a6za1st07byw2z26ttsvlwh3a53rroxx1ov27ov0mq14e5swmxabf1ww20aj351k5wxxuodr9fnawz3mdh00y3bi1rzjpr0384ga7ph5hwx7rx5gctubs08hyoh6copgl5og1b9nltehjvzm7otdz01hhlij38rgau48yyutcsa3uig4inwhy179s10uf1m5vgnvvi == \f\c\c\4\a\i\8\d\c\c\5\8\r\6\n\8\s\v\8\4\r\5\4\q\q\n\i\s\j\r\g\b\c\w\u\a\z\8\8\b\j\6\j\3\o\z\l\a\3\l\k\h\l\4\a\6\w\0\x\y\z\7\m\8\c\a\h\q\9\x\i\4\e\g\v\5\0\d\r\3\2\e\j\y\a\w\l\q\j\a\e\e\5\b\k\h\u\n\s\x\h\g\d\m\d\u\9\0\f\1\p\k\3\i\7\l\g\k\g\1\q\6\t\r\4\l\1\8\w\d\4\c\0\g\b\i\r\d\9\h\j\h\2\i\s\b\h\6\6\g\z\g\o\p\8\v\f\w\h\2\3\i\b\2\z\2\g\d\g\o\f\g\l\f\k\m\l\z\h\g\d\n\e\e\e\p\r\4\z\d\p\5\z\h\n\n\k\b\w\u\6\q\y\w\i\5\h\c\i\9\r\y\4\j\o\z\d\d\6\i\z\l\b\u\0\t\e\f\g\s\5\y\n\7\7\1\1\u\r\u\r\z\m\x\3\9\d\t\j\7\r\g\l\h\t\c\s\0\9\h\i\q\k\f\2\i\b\p\l\g\y\3\5\e\w\2\x\b\p\d\i\p\0\t\9\k\x\x\a\z\c\t\d\w\q\a\c\u\n\p\1\l\a\f\m\w\s\s\r\b\a\5\l\1\a\6\z\a\1\s\t\0\7\b\y\w\2\z\2\6\t\t\s\v\l\w\h\3\a\5\3\r\r\o\x\x\1\o\v\2\7\o\v\0\m\q\1\4\e\5\s\w\m\x\a\b\f\1\w\w\2\0\a\j\3\5\1\k\5\w\x\x\u\o\d\r\9\f\n\a\w\z\3\m\d\h\0\0\y\3\b\i\1\r\z\j\p\r\0\3\8\4\g\a\7\p\h\5\h\w\x\7\r\x\5\g\c\t\u\b\s\0\8\h\y\o\h\6\c\o\p\g\l\5\o\g\1\b\9\n\l\t\e\h\j\v\z\m\7\o\t\d\z\0\1\h\h\l\i\j\3\8\r\g\a\u\4\8\y\y\u\t\c\s\a\3\u\i\g\4\i\n\w\h\y\1\7\9\s\1\0\u\f\1\m\5\v\g\n\v\v\i ]] 00:26:51.062 00:26:51.062 real 0m6.849s 00:26:51.062 user 0m3.550s 00:26:51.062 sys 0m2.181s 00:26:51.062 ************************************ 00:26:51.062 END TEST dd_flags_misc_forced_aio 00:26:51.062 ************************************ 00:26:51.062 16:44:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.062 16:44:22 -- common/autotest_common.sh@10 -- # set +x 00:26:51.062 16:44:22 -- dd/posix.sh@1 -- # cleanup 00:26:51.062 16:44:22 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:51.062 16:44:22 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:51.062 00:26:51.062 real 0m29.545s 00:26:51.062 user 0m14.389s 00:26:51.062 sys 0m8.998s 00:26:51.062 16:44:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.062 ************************************ 00:26:51.062 END TEST spdk_dd_posix 00:26:51.062 ************************************ 00:26:51.062 16:44:22 -- common/autotest_common.sh@10 -- # set +x 00:26:51.062 16:44:22 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:51.062 16:44:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:51.062 16:44:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:51.062 16:44:22 -- 
common/autotest_common.sh@10 -- # set +x 00:26:51.062 ************************************ 00:26:51.062 START TEST spdk_dd_malloc 00:26:51.062 ************************************ 00:26:51.062 16:44:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:51.322 * Looking for test storage... 00:26:51.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:51.322 16:44:22 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:51.322 16:44:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.322 16:44:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.322 16:44:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.322 16:44:22 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:51.322 16:44:22 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:51.322 16:44:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:51.322 16:44:22 -- paths/export.sh@5 -- # export PATH 00:26:51.322 16:44:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:51.322 16:44:22 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:26:51.322 16:44:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:51.322 16:44:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:51.322 16:44:22 -- common/autotest_common.sh@10 -- # set +x 00:26:51.322 ************************************ 00:26:51.322 START TEST dd_malloc_copy 00:26:51.322 ************************************ 00:26:51.322 16:44:22 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:26:51.322 16:44:22 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:26:51.322 16:44:22 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:26:51.322 16:44:22 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:26:51.322 16:44:22 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:26:51.322 16:44:22 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:26:51.322 16:44:22 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:26:51.322 16:44:22 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:26:51.322 16:44:22 -- dd/malloc.sh@28 -- # gen_conf 00:26:51.322 16:44:22 -- dd/common.sh@31 -- # xtrace_disable 00:26:51.322 16:44:22 -- common/autotest_common.sh@10 -- # set +x 00:26:51.322 { 00:26:51.322 "subsystems": [ 00:26:51.322 { 00:26:51.322 "subsystem": "bdev", 00:26:51.322 "config": [ 00:26:51.322 { 00:26:51.322 "params": { 00:26:51.322 "block_size": 512, 00:26:51.322 "num_blocks": 1048576, 00:26:51.322 "name": "malloc0" 00:26:51.322 }, 00:26:51.322 "method": "bdev_malloc_create" 00:26:51.322 }, 00:26:51.322 { 00:26:51.322 "params": { 00:26:51.322 "block_size": 512, 00:26:51.322 "num_blocks": 1048576, 00:26:51.322 "name": "malloc1" 00:26:51.322 }, 00:26:51.322 "method": "bdev_malloc_create" 00:26:51.322 }, 00:26:51.322 { 00:26:51.322 "method": "bdev_wait_for_examine" 00:26:51.322 } 00:26:51.322 ] 00:26:51.322 } 00:26:51.322 ] 00:26:51.322 } 00:26:51.322 [2024-07-13 16:44:22.629309] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:51.323 [2024-07-13 16:44:22.629588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145079 ] 00:26:51.323 [2024-07-13 16:44:22.784698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.580 [2024-07-13 16:44:22.862486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.262  Copying: 236/512 [MB] (236 MBps) Copying: 471/512 [MB] (235 MBps) Copying: 512/512 [MB] (average 235 MBps) 00:26:55.262 00:26:55.262 16:44:26 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:26:55.262 16:44:26 -- dd/malloc.sh@33 -- # gen_conf 00:26:55.262 16:44:26 -- dd/common.sh@31 -- # xtrace_disable 00:26:55.262 16:44:26 -- common/autotest_common.sh@10 -- # set +x 00:26:55.262 [2024-07-13 16:44:26.594280] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:55.262 [2024-07-13 16:44:26.594481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145137 ] 00:26:55.262 { 00:26:55.262 "subsystems": [ 00:26:55.262 { 00:26:55.262 "subsystem": "bdev", 00:26:55.262 "config": [ 00:26:55.262 { 00:26:55.262 "params": { 00:26:55.262 "block_size": 512, 00:26:55.262 "num_blocks": 1048576, 00:26:55.262 "name": "malloc0" 00:26:55.262 }, 00:26:55.262 "method": "bdev_malloc_create" 00:26:55.262 }, 00:26:55.262 { 00:26:55.262 "params": { 00:26:55.262 "block_size": 512, 00:26:55.262 "num_blocks": 1048576, 00:26:55.262 "name": "malloc1" 00:26:55.262 }, 00:26:55.262 "method": "bdev_malloc_create" 00:26:55.262 }, 00:26:55.262 { 00:26:55.262 "method": "bdev_wait_for_examine" 00:26:55.262 } 00:26:55.262 ] 00:26:55.262 } 00:26:55.262 ] 00:26:55.262 } 00:26:55.520 [2024-07-13 16:44:26.739620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.520 [2024-07-13 16:44:26.807974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.202  Copying: 236/512 [MB] (236 MBps) Copying: 471/512 [MB] (235 MBps) Copying: 512/512 [MB] (average 235 MBps) 00:26:59.202 00:26:59.202 00:26:59.202 real 0m7.924s 00:26:59.202 user 0m6.414s 00:26:59.202 sys 0m1.378s 00:26:59.202 16:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.202 ************************************ 00:26:59.202 END TEST dd_malloc_copy 00:26:59.202 ************************************ 00:26:59.202 16:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:59.202 00:26:59.202 real 0m8.092s 00:26:59.202 user 0m6.488s 00:26:59.202 sys 0m1.482s 00:26:59.202 16:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.202 16:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:59.202 ************************************ 00:26:59.202 END TEST spdk_dd_malloc 00:26:59.202 ************************************ 00:26:59.202 16:44:30 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:59.202 16:44:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:59.202 16:44:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:59.202 16:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:59.202 ************************************ 00:26:59.202 START TEST spdk_dd_bdev_to_bdev 00:26:59.202 ************************************ 00:26:59.202 16:44:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:59.460 * Looking for test storage... 
00:26:59.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:59.460 16:44:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:59.460 16:44:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.460 16:44:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.460 16:44:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.460 16:44:30 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:59.460 16:44:30 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:59.460 16:44:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:59.460 16:44:30 -- paths/export.sh@5 -- # export PATH 00:26:59.460 16:44:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:26:59.460 16:44:30 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:26:59.460 16:44:30 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:26:59.460 [2024-07-13 16:44:30.772211] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:26:59.460 [2024-07-13 16:44:30.772512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145247 ] 00:26:59.460 [2024-07-13 16:44:30.925256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.719 [2024-07-13 16:44:30.993641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.545  Copying: 256/256 [MB] (average 996 MBps) 00:27:00.545 00:27:00.545 16:44:31 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:00.545 16:44:31 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:00.545 16:44:31 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:27:00.545 16:44:31 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:27:00.545 16:44:31 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:00.545 16:44:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:00.545 16:44:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:00.545 16:44:31 -- common/autotest_common.sh@10 -- # set +x 00:27:00.545 ************************************ 00:27:00.545 START TEST dd_inflate_file 00:27:00.545 ************************************ 00:27:00.545 16:44:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:00.545 [2024-07-13 16:44:31.872411] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:00.546 [2024-07-13 16:44:31.872720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145270 ] 00:27:00.805 [2024-07-13 16:44:32.027678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.805 [2024-07-13 16:44:32.094791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.373  Copying: 64/64 [MB] (average 1015 MBps) 00:27:01.373 00:27:01.373 00:27:01.373 real 0m0.909s 00:27:01.373 user 0m0.426s 00:27:01.373 sys 0m0.337s 00:27:01.373 ************************************ 00:27:01.373 END TEST dd_inflate_file 00:27:01.373 ************************************ 00:27:01.373 16:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.373 16:44:32 -- common/autotest_common.sh@10 -- # set +x 00:27:01.373 16:44:32 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:27:01.373 16:44:32 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:27:01.373 16:44:32 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:01.373 16:44:32 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:27:01.373 16:44:32 -- dd/common.sh@31 -- # xtrace_disable 00:27:01.373 16:44:32 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:01.373 16:44:32 -- common/autotest_common.sh@10 -- # set +x 00:27:01.373 16:44:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:01.373 16:44:32 -- common/autotest_common.sh@10 -- # set +x 00:27:01.373 ************************************ 00:27:01.373 START TEST dd_copy_to_out_bdev 00:27:01.373 ************************************ 00:27:01.373 16:44:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:01.373 { 00:27:01.373 "subsystems": [ 00:27:01.373 { 00:27:01.373 "subsystem": "bdev", 00:27:01.373 "config": [ 00:27:01.373 { 00:27:01.373 "params": { 00:27:01.373 "block_size": 4096, 00:27:01.373 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:01.373 "name": "aio1" 00:27:01.373 }, 00:27:01.373 "method": "bdev_aio_create" 00:27:01.373 }, 00:27:01.373 { 00:27:01.373 "params": { 00:27:01.373 "trtype": "pcie", 00:27:01.373 "traddr": "0000:00:06.0", 00:27:01.373 "name": "Nvme0" 00:27:01.373 }, 00:27:01.373 "method": "bdev_nvme_attach_controller" 00:27:01.373 }, 00:27:01.373 { 00:27:01.373 "method": "bdev_wait_for_examine" 00:27:01.373 } 00:27:01.373 ] 00:27:01.373 } 00:27:01.373 ] 00:27:01.373 } 00:27:01.631 [2024-07-13 16:44:32.848473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:01.631 [2024-07-13 16:44:32.848763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145318 ] 00:27:01.631 [2024-07-13 16:44:33.001632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.631 [2024-07-13 16:44:33.067488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.265  Copying: 64/64 [MB] (average 74 MBps) 00:27:03.265 00:27:03.265 00:27:03.265 real 0m1.858s 00:27:03.265 user 0m1.414s 00:27:03.265 sys 0m0.326s 00:27:03.265 ************************************ 00:27:03.265 END TEST dd_copy_to_out_bdev 00:27:03.265 ************************************ 00:27:03.265 16:44:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.265 16:44:34 -- common/autotest_common.sh@10 -- # set +x 00:27:03.265 16:44:34 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:27:03.265 16:44:34 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:27:03.265 16:44:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:03.265 16:44:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:03.265 16:44:34 -- common/autotest_common.sh@10 -- # set +x 00:27:03.265 ************************************ 00:27:03.265 START TEST dd_offset_magic 00:27:03.265 ************************************ 00:27:03.265 16:44:34 -- common/autotest_common.sh@1104 -- # offset_magic 00:27:03.265 16:44:34 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:27:03.265 16:44:34 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:27:03.265 16:44:34 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:27:03.265 16:44:34 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:03.265 16:44:34 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:27:03.265 16:44:34 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:03.265 16:44:34 -- dd/common.sh@31 -- # xtrace_disable 00:27:03.265 16:44:34 -- common/autotest_common.sh@10 -- # set +x 00:27:03.524 [2024-07-13 16:44:34.761887] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:03.524 [2024-07-13 16:44:34.762093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145364 ] 00:27:03.524 { 00:27:03.524 "subsystems": [ 00:27:03.524 { 00:27:03.524 "subsystem": "bdev", 00:27:03.524 "config": [ 00:27:03.524 { 00:27:03.524 "params": { 00:27:03.524 "block_size": 4096, 00:27:03.524 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:03.524 "name": "aio1" 00:27:03.524 }, 00:27:03.524 "method": "bdev_aio_create" 00:27:03.524 }, 00:27:03.524 { 00:27:03.524 "params": { 00:27:03.524 "trtype": "pcie", 00:27:03.524 "traddr": "0000:00:06.0", 00:27:03.524 "name": "Nvme0" 00:27:03.524 }, 00:27:03.524 "method": "bdev_nvme_attach_controller" 00:27:03.524 }, 00:27:03.524 { 00:27:03.524 "method": "bdev_wait_for_examine" 00:27:03.524 } 00:27:03.524 ] 00:27:03.524 } 00:27:03.524 ] 00:27:03.524 } 00:27:03.524 [2024-07-13 16:44:34.907703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.524 [2024-07-13 16:44:34.981891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.775  Copying: 65/65 [MB] (average 140 MBps) 00:27:04.775 00:27:04.775 16:44:36 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:27:04.775 16:44:36 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:04.775 16:44:36 -- dd/common.sh@31 -- # xtrace_disable 00:27:04.775 16:44:36 -- common/autotest_common.sh@10 -- # set +x 00:27:04.775 { 00:27:04.775 "subsystems": [ 00:27:04.775 { 00:27:04.775 "subsystem": "bdev", 00:27:04.775 "config": [ 00:27:04.775 { 00:27:04.775 "params": { 00:27:04.775 "block_size": 4096, 00:27:04.775 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:04.775 "name": "aio1" 00:27:04.775 }, 00:27:04.775 "method": "bdev_aio_create" 00:27:04.775 }, 00:27:04.775 { 00:27:04.775 "params": { 00:27:04.775 "trtype": "pcie", 00:27:04.775 "traddr": "0000:00:06.0", 00:27:04.775 "name": "Nvme0" 00:27:04.775 }, 00:27:04.775 "method": "bdev_nvme_attach_controller" 00:27:04.775 }, 00:27:04.775 { 00:27:04.775 "method": "bdev_wait_for_examine" 00:27:04.775 } 00:27:04.775 ] 00:27:04.775 } 00:27:04.775 ] 00:27:04.775 } 00:27:04.775 [2024-07-13 16:44:36.242367] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:04.775 [2024-07-13 16:44:36.242640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145396 ] 00:27:05.034 [2024-07-13 16:44:36.403583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.034 [2024-07-13 16:44:36.481066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.861  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:05.861 00:27:05.861 16:44:37 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:05.861 16:44:37 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:05.861 16:44:37 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:05.861 16:44:37 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:27:05.861 16:44:37 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:05.861 16:44:37 -- dd/common.sh@31 -- # xtrace_disable 00:27:05.861 16:44:37 -- common/autotest_common.sh@10 -- # set +x 00:27:05.861 [2024-07-13 16:44:37.268147] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:05.861 [2024-07-13 16:44:37.268385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145414 ] 00:27:05.861 { 00:27:05.861 "subsystems": [ 00:27:05.861 { 00:27:05.861 "subsystem": "bdev", 00:27:05.861 "config": [ 00:27:05.861 { 00:27:05.861 "params": { 00:27:05.861 "block_size": 4096, 00:27:05.861 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:05.861 "name": "aio1" 00:27:05.861 }, 00:27:05.861 "method": "bdev_aio_create" 00:27:05.861 }, 00:27:05.861 { 00:27:05.861 "params": { 00:27:05.861 "trtype": "pcie", 00:27:05.861 "traddr": "0000:00:06.0", 00:27:05.861 "name": "Nvme0" 00:27:05.861 }, 00:27:05.861 "method": "bdev_nvme_attach_controller" 00:27:05.861 }, 00:27:05.861 { 00:27:05.861 "method": "bdev_wait_for_examine" 00:27:05.861 } 00:27:05.861 ] 00:27:05.861 } 00:27:05.861 ] 00:27:05.861 } 00:27:06.119 [2024-07-13 16:44:37.412136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.119 [2024-07-13 16:44:37.489881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.256  Copying: 65/65 [MB] (average 175 MBps) 00:27:07.256 00:27:07.256 16:44:38 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:27:07.256 16:44:38 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:07.256 16:44:38 -- dd/common.sh@31 -- # xtrace_disable 00:27:07.256 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:27:07.256 { 00:27:07.256 "subsystems": [ 00:27:07.256 { 00:27:07.256 "subsystem": "bdev", 00:27:07.256 "config": [ 00:27:07.256 { 00:27:07.256 "params": { 00:27:07.257 "block_size": 4096, 00:27:07.257 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:07.257 "name": "aio1" 00:27:07.257 }, 00:27:07.257 "method": "bdev_aio_create" 00:27:07.257 }, 00:27:07.257 { 00:27:07.257 "params": { 00:27:07.257 "trtype": "pcie", 00:27:07.257 "traddr": "0000:00:06.0", 00:27:07.257 "name": "Nvme0" 00:27:07.257 }, 
00:27:07.257 "method": "bdev_nvme_attach_controller" 00:27:07.257 }, 00:27:07.257 { 00:27:07.257 "method": "bdev_wait_for_examine" 00:27:07.257 } 00:27:07.257 ] 00:27:07.257 } 00:27:07.257 ] 00:27:07.257 } 00:27:07.257 [2024-07-13 16:44:38.584623] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:07.257 [2024-07-13 16:44:38.584887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145436 ] 00:27:07.516 [2024-07-13 16:44:38.740882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.516 [2024-07-13 16:44:38.817759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.342  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:08.342 00:27:08.342 16:44:39 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:08.342 ************************************ 00:27:08.342 END TEST dd_offset_magic 00:27:08.342 ************************************ 00:27:08.342 16:44:39 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:08.342 00:27:08.342 real 0m4.816s 00:27:08.342 user 0m2.369s 00:27:08.342 sys 0m1.265s 00:27:08.342 16:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.342 16:44:39 -- common/autotest_common.sh@10 -- # set +x 00:27:08.342 16:44:39 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:27:08.342 16:44:39 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:27:08.342 16:44:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:08.342 16:44:39 -- dd/common.sh@11 -- # local nvme_ref= 00:27:08.342 16:44:39 -- dd/common.sh@12 -- # local size=4194330 00:27:08.342 16:44:39 -- dd/common.sh@14 -- # local bs=1048576 00:27:08.342 16:44:39 -- dd/common.sh@15 -- # local count=5 00:27:08.342 16:44:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:27:08.342 16:44:39 -- dd/common.sh@18 -- # gen_conf 00:27:08.342 16:44:39 -- dd/common.sh@31 -- # xtrace_disable 00:27:08.342 16:44:39 -- common/autotest_common.sh@10 -- # set +x 00:27:08.342 { 00:27:08.342 "subsystems": [ 00:27:08.342 { 00:27:08.342 "subsystem": "bdev", 00:27:08.342 "config": [ 00:27:08.342 { 00:27:08.342 "params": { 00:27:08.342 "block_size": 4096, 00:27:08.342 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:08.342 "name": "aio1" 00:27:08.342 }, 00:27:08.342 "method": "bdev_aio_create" 00:27:08.342 }, 00:27:08.342 { 00:27:08.342 "params": { 00:27:08.342 "trtype": "pcie", 00:27:08.342 "traddr": "0000:00:06.0", 00:27:08.342 "name": "Nvme0" 00:27:08.342 }, 00:27:08.342 "method": "bdev_nvme_attach_controller" 00:27:08.342 }, 00:27:08.342 { 00:27:08.342 "method": "bdev_wait_for_examine" 00:27:08.342 } 00:27:08.342 ] 00:27:08.342 } 00:27:08.342 ] 00:27:08.342 } 00:27:08.342 [2024-07-13 16:44:39.647451] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:08.342 [2024-07-13 16:44:39.647705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145473 ] 00:27:08.342 [2024-07-13 16:44:39.800912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.600 [2024-07-13 16:44:39.871443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.117  Copying: 5120/5120 [kB] (average 1000 MBps) 00:27:09.117 00:27:09.117 16:44:40 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:27:09.117 16:44:40 -- dd/common.sh@10 -- # local bdev=aio1 00:27:09.117 16:44:40 -- dd/common.sh@11 -- # local nvme_ref= 00:27:09.117 16:44:40 -- dd/common.sh@12 -- # local size=4194330 00:27:09.117 16:44:40 -- dd/common.sh@14 -- # local bs=1048576 00:27:09.117 16:44:40 -- dd/common.sh@15 -- # local count=5 00:27:09.117 16:44:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:27:09.117 16:44:40 -- dd/common.sh@18 -- # gen_conf 00:27:09.117 16:44:40 -- dd/common.sh@31 -- # xtrace_disable 00:27:09.117 16:44:40 -- common/autotest_common.sh@10 -- # set +x 00:27:09.375 [2024-07-13 16:44:40.604176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:09.375 [2024-07-13 16:44:40.604428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145495 ] 00:27:09.375 { 00:27:09.375 "subsystems": [ 00:27:09.375 { 00:27:09.375 "subsystem": "bdev", 00:27:09.375 "config": [ 00:27:09.375 { 00:27:09.375 "params": { 00:27:09.375 "block_size": 4096, 00:27:09.375 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:09.375 "name": "aio1" 00:27:09.375 }, 00:27:09.375 "method": "bdev_aio_create" 00:27:09.375 }, 00:27:09.375 { 00:27:09.375 "params": { 00:27:09.375 "trtype": "pcie", 00:27:09.375 "traddr": "0000:00:06.0", 00:27:09.375 "name": "Nvme0" 00:27:09.375 }, 00:27:09.375 "method": "bdev_nvme_attach_controller" 00:27:09.375 }, 00:27:09.375 { 00:27:09.375 "method": "bdev_wait_for_examine" 00:27:09.375 } 00:27:09.375 ] 00:27:09.375 } 00:27:09.375 ] 00:27:09.375 } 00:27:09.375 [2024-07-13 16:44:40.747875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.375 [2024-07-13 16:44:40.819610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.201  Copying: 5120/5120 [kB] (average 263 MBps) 00:27:10.201 00:27:10.201 16:44:41 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:10.201 00:27:10.201 real 0m11.009s 00:27:10.201 user 0m5.884s 00:27:10.201 sys 0m3.281s 00:27:10.201 16:44:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.201 ************************************ 00:27:10.201 END TEST spdk_dd_bdev_to_bdev 00:27:10.201 ************************************ 00:27:10.201 16:44:41 -- common/autotest_common.sh@10 -- # set +x 00:27:10.201 16:44:41 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:27:10.201 16:44:41 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:10.201 16:44:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:27:10.201 16:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:10.201 16:44:41 -- common/autotest_common.sh@10 -- # set +x 00:27:10.460 ************************************ 00:27:10.460 START TEST spdk_dd_sparse 00:27:10.460 ************************************ 00:27:10.460 16:44:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:10.460 * Looking for test storage... 00:27:10.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:10.460 16:44:41 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:10.460 16:44:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.460 16:44:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.460 16:44:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.460 16:44:41 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:10.460 16:44:41 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:10.460 16:44:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:10.460 16:44:41 -- paths/export.sh@5 -- # export PATH 00:27:10.460 16:44:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:10.460 16:44:41 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:27:10.460 16:44:41 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:27:10.460 16:44:41 -- dd/sparse.sh@110 -- # file1=file_zero1 00:27:10.460 16:44:41 -- dd/sparse.sh@111 -- # file2=file_zero2 00:27:10.460 16:44:41 -- dd/sparse.sh@112 -- # file3=file_zero3 00:27:10.460 16:44:41 -- dd/sparse.sh@113 -- # 
lvstore=dd_lvstore 00:27:10.460 16:44:41 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:27:10.460 16:44:41 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:27:10.460 16:44:41 -- dd/sparse.sh@118 -- # prepare 00:27:10.460 16:44:41 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:27:10.460 16:44:41 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:27:10.460 1+0 records in 00:27:10.460 1+0 records out 00:27:10.460 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0120682 s, 348 MB/s 00:27:10.460 16:44:41 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:27:10.460 1+0 records in 00:27:10.460 1+0 records out 00:27:10.460 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0132433 s, 317 MB/s 00:27:10.460 16:44:41 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:27:10.460 1+0 records in 00:27:10.460 1+0 records out 00:27:10.460 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00698834 s, 600 MB/s 00:27:10.460 16:44:41 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:27:10.460 16:44:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:10.460 16:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:10.460 16:44:41 -- common/autotest_common.sh@10 -- # set +x 00:27:10.460 ************************************ 00:27:10.460 START TEST dd_sparse_file_to_file 00:27:10.460 ************************************ 00:27:10.460 16:44:41 -- common/autotest_common.sh@1104 -- # file_to_file 00:27:10.460 16:44:41 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:27:10.460 16:44:41 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:27:10.460 16:44:41 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:10.460 16:44:41 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:27:10.460 16:44:41 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:27:10.461 16:44:41 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:27:10.461 16:44:41 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:27:10.461 16:44:41 -- dd/sparse.sh@41 -- # gen_conf 00:27:10.461 16:44:41 -- dd/common.sh@31 -- # xtrace_disable 00:27:10.461 16:44:41 -- common/autotest_common.sh@10 -- # set +x 00:27:10.461 { 00:27:10.461 "subsystems": [ 00:27:10.461 { 00:27:10.461 "subsystem": "bdev", 00:27:10.461 "config": [ 00:27:10.461 { 00:27:10.461 "params": { 00:27:10.461 "block_size": 4096, 00:27:10.461 "filename": "dd_sparse_aio_disk", 00:27:10.461 "name": "dd_aio" 00:27:10.461 }, 00:27:10.461 "method": "bdev_aio_create" 00:27:10.461 }, 00:27:10.461 { 00:27:10.461 "params": { 00:27:10.461 "lvs_name": "dd_lvstore", 00:27:10.461 "bdev_name": "dd_aio" 00:27:10.461 }, 00:27:10.461 "method": "bdev_lvol_create_lvstore" 00:27:10.461 }, 00:27:10.461 { 00:27:10.461 "method": "bdev_wait_for_examine" 00:27:10.461 } 00:27:10.461 ] 00:27:10.461 } 00:27:10.461 ] 00:27:10.461 } 00:27:10.461 [2024-07-13 16:44:41.917446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:10.461 [2024-07-13 16:44:41.917687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145578 ] 00:27:10.719 [2024-07-13 16:44:42.072214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.719 [2024-07-13 16:44:42.137951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.545  Copying: 12/36 [MB] (average 750 MBps) 00:27:11.545 00:27:11.545 16:44:42 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:27:11.545 16:44:42 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:27:11.545 16:44:42 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:27:11.545 16:44:42 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:27:11.545 16:44:42 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:11.545 16:44:42 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:27:11.545 16:44:42 -- dd/sparse.sh@52 -- # stat1_b=24576 00:27:11.545 16:44:42 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:27:11.545 16:44:42 -- dd/sparse.sh@53 -- # stat2_b=24576 00:27:11.545 16:44:42 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:11.545 00:27:11.545 real 0m0.994s 00:27:11.545 user 0m0.536s 00:27:11.545 sys 0m0.313s 00:27:11.545 16:44:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.545 ************************************ 00:27:11.545 END TEST dd_sparse_file_to_file 00:27:11.545 ************************************ 00:27:11.545 16:44:42 -- common/autotest_common.sh@10 -- # set +x 00:27:11.545 16:44:42 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:27:11.545 16:44:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:11.545 16:44:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:11.545 16:44:42 -- common/autotest_common.sh@10 -- # set +x 00:27:11.545 ************************************ 00:27:11.545 START TEST dd_sparse_file_to_bdev 00:27:11.545 ************************************ 00:27:11.545 16:44:42 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:27:11.545 16:44:42 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:11.545 16:44:42 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:27:11.545 16:44:42 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:27:11.545 16:44:42 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:27:11.545 16:44:42 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:27:11.545 16:44:42 -- dd/sparse.sh@73 -- # gen_conf 00:27:11.545 16:44:42 -- dd/common.sh@31 -- # xtrace_disable 00:27:11.545 16:44:42 -- common/autotest_common.sh@10 -- # set +x 00:27:11.545 { 00:27:11.545 "subsystems": [ 00:27:11.545 { 00:27:11.545 "subsystem": "bdev", 00:27:11.545 "config": [ 00:27:11.545 { 00:27:11.545 "params": { 00:27:11.545 "block_size": 4096, 00:27:11.545 "filename": "dd_sparse_aio_disk", 00:27:11.545 "name": "dd_aio" 00:27:11.545 }, 00:27:11.545 "method": "bdev_aio_create" 00:27:11.545 }, 00:27:11.545 { 00:27:11.545 "params": { 00:27:11.545 "lvs_name": "dd_lvstore", 00:27:11.545 "lvol_name": "dd_lvol", 00:27:11.545 "size": 37748736, 00:27:11.545 "thin_provision": true 00:27:11.545 }, 
00:27:11.545 "method": "bdev_lvol_create" 00:27:11.545 }, 00:27:11.545 { 00:27:11.545 "method": "bdev_wait_for_examine" 00:27:11.545 } 00:27:11.545 ] 00:27:11.545 } 00:27:11.545 ] 00:27:11.545 } 00:27:11.545 [2024-07-13 16:44:42.976639] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:11.545 [2024-07-13 16:44:42.976813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145631 ] 00:27:11.803 [2024-07-13 16:44:43.120030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.803 [2024-07-13 16:44:43.188694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.061 [2024-07-13 16:44:43.316705] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:27:12.061  Copying: 12/36 [MB] (average 444 MBps)[2024-07-13 16:44:43.366555] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:27:12.627 00:27:12.628 00:27:12.628 00:27:12.628 real 0m0.934s 00:27:12.628 user 0m0.546s 00:27:12.628 sys 0m0.289s 00:27:12.628 16:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.628 16:44:43 -- common/autotest_common.sh@10 -- # set +x 00:27:12.628 ************************************ 00:27:12.628 END TEST dd_sparse_file_to_bdev 00:27:12.628 ************************************ 00:27:12.628 16:44:43 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:27:12.628 16:44:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:12.628 16:44:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:12.628 16:44:43 -- common/autotest_common.sh@10 -- # set +x 00:27:12.628 ************************************ 00:27:12.628 START TEST dd_sparse_bdev_to_file 00:27:12.628 ************************************ 00:27:12.628 16:44:43 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:27:12.628 16:44:43 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:27:12.628 16:44:43 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:27:12.628 16:44:43 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:12.628 16:44:43 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:27:12.628 16:44:43 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:27:12.628 16:44:43 -- dd/sparse.sh@91 -- # gen_conf 00:27:12.628 16:44:43 -- dd/common.sh@31 -- # xtrace_disable 00:27:12.628 16:44:43 -- common/autotest_common.sh@10 -- # set +x 00:27:12.628 { 00:27:12.628 "subsystems": [ 00:27:12.628 { 00:27:12.628 "subsystem": "bdev", 00:27:12.628 "config": [ 00:27:12.628 { 00:27:12.628 "params": { 00:27:12.628 "block_size": 4096, 00:27:12.628 "filename": "dd_sparse_aio_disk", 00:27:12.628 "name": "dd_aio" 00:27:12.628 }, 00:27:12.628 "method": "bdev_aio_create" 00:27:12.628 }, 00:27:12.628 { 00:27:12.628 "method": "bdev_wait_for_examine" 00:27:12.628 } 00:27:12.628 ] 00:27:12.628 } 00:27:12.628 ] 00:27:12.628 } 00:27:12.628 [2024-07-13 16:44:43.974559] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:12.628 [2024-07-13 16:44:43.974812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145676 ] 00:27:12.887 [2024-07-13 16:44:44.128303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.887 [2024-07-13 16:44:44.194785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.456  Copying: 12/36 [MB] (average 857 MBps) 00:27:13.456 00:27:13.456 16:44:44 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:27:13.456 16:44:44 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:27:13.456 16:44:44 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:27:13.456 16:44:44 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:27:13.456 16:44:44 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:13.456 16:44:44 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:27:13.456 16:44:44 -- dd/sparse.sh@102 -- # stat2_b=24576 00:27:13.456 16:44:44 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:27:13.456 16:44:44 -- dd/sparse.sh@103 -- # stat3_b=24576 00:27:13.456 16:44:44 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:13.456 00:27:13.456 real 0m0.940s 00:27:13.456 user 0m0.501s 00:27:13.456 sys 0m0.328s 00:27:13.456 16:44:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.456 16:44:44 -- common/autotest_common.sh@10 -- # set +x 00:27:13.456 ************************************ 00:27:13.456 END TEST dd_sparse_bdev_to_file 00:27:13.456 ************************************ 00:27:13.456 16:44:44 -- dd/sparse.sh@1 -- # cleanup 00:27:13.456 16:44:44 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:27:13.456 16:44:44 -- dd/sparse.sh@12 -- # rm file_zero1 00:27:13.456 16:44:44 -- dd/sparse.sh@13 -- # rm file_zero2 00:27:13.456 16:44:44 -- dd/sparse.sh@14 -- # rm file_zero3 00:27:13.716 00:27:13.716 real 0m3.254s 00:27:13.716 user 0m1.756s 00:27:13.716 sys 0m1.155s 00:27:13.716 16:44:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.716 16:44:44 -- common/autotest_common.sh@10 -- # set +x 00:27:13.716 ************************************ 00:27:13.716 END TEST spdk_dd_sparse 00:27:13.716 ************************************ 00:27:13.716 16:44:44 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:13.716 16:44:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:13.716 16:44:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:13.716 16:44:44 -- common/autotest_common.sh@10 -- # set +x 00:27:13.716 ************************************ 00:27:13.716 START TEST spdk_dd_negative 00:27:13.716 ************************************ 00:27:13.716 16:44:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:13.716 * Looking for test storage... 
00:27:13.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:13.716 16:44:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:13.716 16:44:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.716 16:44:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.716 16:44:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.716 16:44:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:13.716 16:44:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:13.716 16:44:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:13.716 16:44:45 -- paths/export.sh@5 -- # export PATH 00:27:13.716 16:44:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:13.716 16:44:45 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:13.716 16:44:45 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:13.716 16:44:45 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:13.716 16:44:45 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:13.716 16:44:45 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:27:13.716 16:44:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:13.716 16:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:13.716 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:13.716 ************************************ 00:27:13.716 
START TEST dd_invalid_arguments 00:27:13.716 ************************************ 00:27:13.716 16:44:45 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:27:13.716 16:44:45 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:13.716 16:44:45 -- common/autotest_common.sh@640 -- # local es=0 00:27:13.716 16:44:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:13.716 16:44:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.716 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.716 16:44:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.716 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.716 16:44:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.716 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.716 16:44:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.716 16:44:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:13.716 16:44:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:13.976 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:27:13.976 options: 00:27:13.976 -c, --config JSON config file (default none) 00:27:13.976 --json JSON config file (default none) 00:27:13.976 --json-ignore-init-errors 00:27:13.977 don't exit on invalid config entry 00:27:13.977 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:27:13.977 -g, --single-file-segments 00:27:13.977 force creating just one hugetlbfs file 00:27:13.977 -h, --help show this usage 00:27:13.977 -i, --shm-id shared memory ID (optional) 00:27:13.977 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:27:13.977 --lcores lcore to CPU mapping list. The list is in the format: 00:27:13.977 [<,lcores[@CPUs]>...] 00:27:13.977 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:27:13.977 Within the group, '-' is used for range separator, 00:27:13.977 ',' is used for single number separator. 00:27:13.977 '( )' can be omitted for single element group, 00:27:13.977 '@' can be omitted if cpus and lcores have the same value 00:27:13.977 -n, --mem-channels channel number of memory channels used for DPDK 00:27:13.977 -p, --main-core main (primary) core for DPDK 00:27:13.977 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:27:13.977 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:27:13.977 --disable-cpumask-locks Disable CPU core lock files. 
00:27:13.977 --silence-noticelog disable notice level logging to stderr 00:27:13.977 --msg-mempool-size global message memory pool size in count (default: 262143) 00:27:13.977 -u, --no-pci disable PCI access 00:27:13.977 --wait-for-rpc wait for RPCs to initialize subsystems 00:27:13.977 --max-delay maximum reactor delay (in microseconds) 00:27:13.977 -B, --pci-blocked pci addr to block (can be used more than once) 00:27:13.977 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:27:13.977 -R, --huge-unlink unlink huge files after initialization 00:27:13.977 -v, --version print SPDK version 00:27:13.977 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:27:13.977 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:27:13.977 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:27:13.977 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:27:13.977 Tracepoints vary in size and can use more than one trace entry. 00:27:13.977 --rpcs-allowed comma-separated list of permitted RPCS 00:27:13.977 --env-context Opaque context for use of the env implementation 00:27:13.977 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:27:13.977 --no-huge run without using hugepages 00:27:13.977 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:27:13.977 -e, --tpoint-group [:] 00:27:13.977 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:27:13.977 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:27:13.977 Groups and /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:27:13.977 [2024-07-13 16:44:45.188101] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:27:13.977 masks can be combined (e.g. thread,bdev:0x1). 00:27:13.977 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:27:13.977 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:27:13.977 [--------- DD Options ---------] 00:27:13.977 --if Input file. Must specify either --if or --ib. 00:27:13.977 --ib Input bdev. Must specifier either --if or --ib 00:27:13.977 --of Output file. Must specify either --of or --ob. 00:27:13.977 --ob Output bdev. Must specify either --of or --ob. 00:27:13.977 --iflag Input file flags. 00:27:13.977 --oflag Output file flags. 00:27:13.977 --bs I/O unit size (default: 4096) 00:27:13.977 --qd Queue depth (default: 2) 00:27:13.977 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:27:13.977 --skip Skip this many I/O units at start of input. (default: 0) 00:27:13.977 --seek Skip this many I/O units at start of output. (default: 0) 00:27:13.977 --aio Force usage of AIO. (by default io_uring is used if available) 00:27:13.977 --sparse Enable hole skipping in input target 00:27:13.977 Available iflag and oflag values: 00:27:13.977 append - append mode 00:27:13.977 direct - use direct I/O for data 00:27:13.977 directory - fail unless a directory 00:27:13.977 dsync - use synchronized I/O for data 00:27:13.977 noatime - do not update access time 00:27:13.977 noctty - do not assign controlling terminal from file 00:27:13.977 nofollow - do not follow symlinks 00:27:13.977 nonblock - use non-blocking I/O 00:27:13.977 sync - use synchronized I/O for data and metadata 00:27:13.977 16:44:45 -- common/autotest_common.sh@643 -- # es=2 00:27:13.977 16:44:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:13.977 16:44:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:13.977 16:44:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:13.977 00:27:13.977 real 0m0.129s 00:27:13.977 user 0m0.055s 00:27:13.977 sys 0m0.075s 00:27:13.977 16:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.977 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:13.977 ************************************ 00:27:13.977 END TEST dd_invalid_arguments 00:27:13.977 ************************************ 00:27:13.977 16:44:45 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:27:13.977 16:44:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:13.977 16:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:13.977 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:13.977 ************************************ 00:27:13.977 START TEST dd_double_input 00:27:13.977 ************************************ 00:27:13.977 16:44:45 -- common/autotest_common.sh@1104 -- # double_input 00:27:13.977 16:44:45 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:13.977 16:44:45 -- common/autotest_common.sh@640 -- # local es=0 00:27:13.977 16:44:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:13.977 16:44:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.977 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.977 16:44:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.977 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.977 16:44:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.977 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.977 16:44:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.977 16:44:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:13.977 16:44:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:13.977 [2024-07-13 16:44:45.378157] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:27:13.977 16:44:45 -- common/autotest_common.sh@643 -- # es=22 00:27:13.977 16:44:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:13.977 16:44:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:13.977 16:44:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:13.977 00:27:13.977 real 0m0.125s 00:27:13.977 user 0m0.068s 00:27:13.977 sys 0m0.058s 00:27:13.977 16:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.977 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:13.977 ************************************ 00:27:13.977 END TEST dd_double_input 00:27:13.977 ************************************ 00:27:14.238 16:44:45 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:27:14.238 16:44:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:14.238 16:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.238 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:14.238 ************************************ 00:27:14.238 START TEST dd_double_output 00:27:14.238 ************************************ 00:27:14.238 16:44:45 -- common/autotest_common.sh@1104 -- # double_output 00:27:14.238 16:44:45 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:14.238 16:44:45 -- common/autotest_common.sh@640 -- # local es=0 00:27:14.238 16:44:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:14.238 16:44:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.238 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.238 16:44:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.238 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.238 16:44:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.238 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.238 16:44:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.238 16:44:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:14.238 16:44:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:14.238 [2024-07-13 16:44:45.556626] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:27:14.238 16:44:45 -- common/autotest_common.sh@643 -- # es=22 00:27:14.238 16:44:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:14.238 16:44:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:14.238 16:44:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:14.238 00:27:14.238 real 0m0.107s 00:27:14.238 user 0m0.046s 00:27:14.238 sys 0m0.062s 00:27:14.238 16:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.238 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:14.238 ************************************ 00:27:14.238 END TEST dd_double_output 00:27:14.238 ************************************ 00:27:14.238 16:44:45 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:27:14.238 16:44:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:14.238 16:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.238 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:14.238 ************************************ 00:27:14.238 START TEST dd_no_input 00:27:14.238 ************************************ 00:27:14.238 16:44:45 -- common/autotest_common.sh@1104 -- # no_input 00:27:14.238 16:44:45 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:14.238 16:44:45 -- common/autotest_common.sh@640 -- # local es=0 00:27:14.238 16:44:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:14.238 16:44:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.238 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.238 16:44:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.238 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.238 16:44:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.238 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.238 16:44:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.238 16:44:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:14.238 16:44:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:14.498 [2024-07-13 16:44:45.730942] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:27:14.498 16:44:45 -- common/autotest_common.sh@643 -- # es=22 00:27:14.498 16:44:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:14.498 16:44:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:14.498 16:44:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:14.498 00:27:14.498 real 0m0.111s 00:27:14.498 user 0m0.064s 00:27:14.498 sys 0m0.048s 00:27:14.498 16:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.498 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:14.498 ************************************ 00:27:14.498 END TEST dd_no_input 00:27:14.498 ************************************ 00:27:14.498 16:44:45 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:27:14.498 16:44:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:14.498 16:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.498 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:14.498 ************************************ 
00:27:14.498 START TEST dd_no_output 00:27:14.498 ************************************ 00:27:14.498 16:44:45 -- common/autotest_common.sh@1104 -- # no_output 00:27:14.498 16:44:45 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:14.498 16:44:45 -- common/autotest_common.sh@640 -- # local es=0 00:27:14.498 16:44:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:14.498 16:44:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.498 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.498 16:44:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.498 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.498 16:44:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.498 16:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.498 16:44:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.498 16:44:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:14.498 16:44:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:14.498 [2024-07-13 16:44:45.907657] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:27:14.498 16:44:45 -- common/autotest_common.sh@643 -- # es=22 00:27:14.498 16:44:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:14.498 16:44:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:14.498 16:44:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:14.498 00:27:14.498 real 0m0.111s 00:27:14.498 user 0m0.076s 00:27:14.498 sys 0m0.035s 00:27:14.498 16:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.498 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:27:14.498 ************************************ 00:27:14.498 END TEST dd_no_output 00:27:14.498 ************************************ 00:27:14.757 16:44:46 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:27:14.757 16:44:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:14.757 16:44:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.757 16:44:46 -- common/autotest_common.sh@10 -- # set +x 00:27:14.757 ************************************ 00:27:14.757 START TEST dd_wrong_blocksize 00:27:14.757 ************************************ 00:27:14.757 16:44:46 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:27:14.757 16:44:46 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:14.757 16:44:46 -- common/autotest_common.sh@640 -- # local es=0 00:27:14.757 16:44:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:14.758 16:44:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.758 16:44:46 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:27:14.758 16:44:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.758 16:44:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.758 16:44:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.758 16:44:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.758 16:44:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.758 16:44:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:14.758 16:44:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:14.758 [2024-07-13 16:44:46.085603] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:27:14.758 16:44:46 -- common/autotest_common.sh@643 -- # es=22 00:27:14.758 16:44:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:14.758 16:44:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:14.758 16:44:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:14.758 00:27:14.758 real 0m0.109s 00:27:14.758 user 0m0.055s 00:27:14.758 sys 0m0.055s 00:27:14.758 16:44:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.758 ************************************ 00:27:14.758 END TEST dd_wrong_blocksize 00:27:14.758 ************************************ 00:27:14.758 16:44:46 -- common/autotest_common.sh@10 -- # set +x 00:27:14.758 16:44:46 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:27:14.758 16:44:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:14.758 16:44:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.758 16:44:46 -- common/autotest_common.sh@10 -- # set +x 00:27:14.758 ************************************ 00:27:14.758 START TEST dd_smaller_blocksize 00:27:14.758 ************************************ 00:27:14.758 16:44:46 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:27:14.758 16:44:46 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:14.758 16:44:46 -- common/autotest_common.sh@640 -- # local es=0 00:27:14.758 16:44:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:14.758 16:44:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.758 16:44:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.758 16:44:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.758 16:44:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.758 16:44:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.758 16:44:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.758 16:44:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.758 16:44:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:27:14.758 16:44:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:15.018 [2024-07-13 16:44:46.266984] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:15.018 [2024-07-13 16:44:46.267253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145936 ] 00:27:15.018 [2024-07-13 16:44:46.423237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.276 [2024-07-13 16:44:46.492580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.276 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:27:15.276 [2024-07-13 16:44:46.700692] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:27:15.276 [2024-07-13 16:44:46.700835] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:15.535 [2024-07-13 16:44:46.884598] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:15.794 16:44:47 -- common/autotest_common.sh@643 -- # es=244 00:27:15.794 16:44:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:15.794 16:44:47 -- common/autotest_common.sh@652 -- # es=116 00:27:15.794 16:44:47 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:15.794 16:44:47 -- common/autotest_common.sh@660 -- # es=1 00:27:15.794 16:44:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:15.794 00:27:15.794 real 0m0.891s 00:27:15.794 user 0m0.442s 00:27:15.794 sys 0m0.347s 00:27:15.794 16:44:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.794 16:44:47 -- common/autotest_common.sh@10 -- # set +x 00:27:15.794 ************************************ 00:27:15.794 END TEST dd_smaller_blocksize 00:27:15.794 ************************************ 00:27:15.794 16:44:47 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:27:15.794 16:44:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:15.794 16:44:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:15.794 16:44:47 -- common/autotest_common.sh@10 -- # set +x 00:27:15.794 ************************************ 00:27:15.794 START TEST dd_invalid_count 00:27:15.794 ************************************ 00:27:15.794 16:44:47 -- common/autotest_common.sh@1104 -- # invalid_count 00:27:15.794 16:44:47 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:15.794 16:44:47 -- common/autotest_common.sh@640 -- # local es=0 00:27:15.794 16:44:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:15.794 16:44:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:15.794 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:15.794 16:44:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:15.794 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:15.794 16:44:47 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:15.794 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:15.794 16:44:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:15.794 16:44:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:15.794 16:44:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:15.794 [2024-07-13 16:44:47.231851] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:27:16.054 16:44:47 -- common/autotest_common.sh@643 -- # es=22 00:27:16.054 16:44:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:16.054 16:44:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:16.054 16:44:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:16.054 00:27:16.054 real 0m0.119s 00:27:16.054 user 0m0.041s 00:27:16.054 sys 0m0.079s 00:27:16.054 16:44:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.054 ************************************ 00:27:16.054 END TEST dd_invalid_count 00:27:16.054 ************************************ 00:27:16.054 16:44:47 -- common/autotest_common.sh@10 -- # set +x 00:27:16.054 16:44:47 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:27:16.054 16:44:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:16.054 16:44:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:16.054 16:44:47 -- common/autotest_common.sh@10 -- # set +x 00:27:16.054 ************************************ 00:27:16.054 START TEST dd_invalid_oflag 00:27:16.054 ************************************ 00:27:16.054 16:44:47 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:27:16.054 16:44:47 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:16.054 16:44:47 -- common/autotest_common.sh@640 -- # local es=0 00:27:16.054 16:44:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:16.054 16:44:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.054 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:16.054 16:44:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.054 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:16.054 16:44:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.054 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:16.054 16:44:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.054 16:44:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:16.054 16:44:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:16.054 [2024-07-13 16:44:47.421076] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:27:16.054 16:44:47 -- common/autotest_common.sh@643 -- # es=22 00:27:16.054 16:44:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:16.054 16:44:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:16.054 
16:44:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:16.054 00:27:16.054 real 0m0.125s 00:27:16.054 user 0m0.054s 00:27:16.054 sys 0m0.071s 00:27:16.054 16:44:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.054 16:44:47 -- common/autotest_common.sh@10 -- # set +x 00:27:16.054 ************************************ 00:27:16.054 END TEST dd_invalid_oflag 00:27:16.054 ************************************ 00:27:16.313 16:44:47 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:27:16.313 16:44:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:16.313 16:44:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:16.313 16:44:47 -- common/autotest_common.sh@10 -- # set +x 00:27:16.313 ************************************ 00:27:16.313 START TEST dd_invalid_iflag 00:27:16.313 ************************************ 00:27:16.313 16:44:47 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:27:16.313 16:44:47 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:16.313 16:44:47 -- common/autotest_common.sh@640 -- # local es=0 00:27:16.313 16:44:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:16.313 16:44:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.313 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:16.313 16:44:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.313 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:16.313 16:44:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.313 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:16.313 16:44:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.313 16:44:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:16.313 16:44:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:16.313 [2024-07-13 16:44:47.612197] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:27:16.313 16:44:47 -- common/autotest_common.sh@643 -- # es=22 00:27:16.313 16:44:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:16.313 16:44:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:16.313 16:44:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:16.313 00:27:16.313 real 0m0.120s 00:27:16.313 user 0m0.050s 00:27:16.313 sys 0m0.070s 00:27:16.313 16:44:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.313 ************************************ 00:27:16.313 16:44:47 -- common/autotest_common.sh@10 -- # set +x 00:27:16.313 END TEST dd_invalid_iflag 00:27:16.313 ************************************ 00:27:16.313 16:44:47 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:27:16.313 16:44:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:16.313 16:44:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:16.313 16:44:47 -- common/autotest_common.sh@10 -- # set +x 00:27:16.313 ************************************ 00:27:16.313 START TEST dd_unknown_flag 00:27:16.313 ************************************ 00:27:16.313 16:44:47 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:27:16.313 16:44:47 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:16.313 16:44:47 -- common/autotest_common.sh@640 -- # local es=0 00:27:16.313 16:44:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:16.313 16:44:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.313 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:16.313 16:44:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.313 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:16.313 16:44:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.313 16:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:16.313 16:44:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.313 16:44:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:16.313 16:44:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:16.572 [2024-07-13 16:44:47.796384] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:16.572 [2024-07-13 16:44:47.796652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146048 ] 00:27:16.572 [2024-07-13 16:44:47.951595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.572 [2024-07-13 16:44:48.024250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.831 [2024-07-13 16:44:48.143475] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:27:16.831 [2024-07-13 16:44:48.143592] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:16.831 [2024-07-13 16:44:48.143646] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:16.831 [2024-07-13 16:44:48.143711] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:17.090 [2024-07-13 16:44:48.325724] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:17.090 16:44:48 -- common/autotest_common.sh@643 -- # es=236 00:27:17.090 16:44:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:17.090 16:44:48 -- common/autotest_common.sh@652 -- # es=108 00:27:17.090 16:44:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:17.090 16:44:48 -- common/autotest_common.sh@660 -- # es=1 00:27:17.090 16:44:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:17.090 00:27:17.090 real 0m0.805s 00:27:17.090 user 0m0.435s 00:27:17.090 sys 0m0.271s 00:27:17.090 16:44:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.090 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:27:17.090 ************************************ 00:27:17.090 END 
TEST dd_unknown_flag 00:27:17.090 ************************************ 00:27:17.348 16:44:48 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:27:17.348 16:44:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:17.348 16:44:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:17.348 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:27:17.348 ************************************ 00:27:17.348 START TEST dd_invalid_json 00:27:17.348 ************************************ 00:27:17.348 16:44:48 -- common/autotest_common.sh@1104 -- # invalid_json 00:27:17.348 16:44:48 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:17.348 16:44:48 -- common/autotest_common.sh@640 -- # local es=0 00:27:17.348 16:44:48 -- dd/negative_dd.sh@95 -- # : 00:27:17.348 16:44:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:17.348 16:44:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.348 16:44:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.348 16:44:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.348 16:44:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.348 16:44:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.348 16:44:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.348 16:44:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.348 16:44:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:17.348 16:44:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:17.348 [2024-07-13 16:44:48.675177] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
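The invocation above points --json at /dev/fd/62, a descriptor the harness feeds from a process substitution, so no temporary config file is needed; the parse failure recorded below is the expected result. A hedged sketch of the same trick, reusing the NOT helper sketched earlier (whether the harness feeds an empty or a malformed document is not fully visible in the trace; any invalid body triggers the same rejection):

    # <( ... ) expands to a /dev/fd/NN path like the /dev/fd/62 seen here.
    NOT spdk_dd --if=dd.dump0 --of=dd.dump1 --json <(echo '{ not valid json')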
00:27:17.348 [2024-07-13 16:44:48.675444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146082 ] 00:27:17.607 [2024-07-13 16:44:48.831924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.607 [2024-07-13 16:44:48.902443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.607 [2024-07-13 16:44:48.902691] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:27:17.607 [2024-07-13 16:44:48.902744] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:17.607 [2024-07-13 16:44:48.902821] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:17.866 16:44:49 -- common/autotest_common.sh@643 -- # es=234 00:27:17.866 16:44:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:17.866 16:44:49 -- common/autotest_common.sh@652 -- # es=106 00:27:17.866 16:44:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:17.866 16:44:49 -- common/autotest_common.sh@660 -- # es=1 00:27:17.866 16:44:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:17.866 00:27:17.866 real 0m0.494s 00:27:17.866 user 0m0.233s 00:27:17.866 sys 0m0.164s 00:27:17.866 16:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.866 16:44:49 -- common/autotest_common.sh@10 -- # set +x 00:27:17.866 ************************************ 00:27:17.866 END TEST dd_invalid_json 00:27:17.866 ************************************ 00:27:17.866 00:27:17.866 real 0m4.152s 00:27:17.866 user 0m2.030s 00:27:17.866 sys 0m1.825s 00:27:17.866 16:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.866 16:44:49 -- common/autotest_common.sh@10 -- # set +x 00:27:17.866 ************************************ 00:27:17.866 END TEST spdk_dd_negative 00:27:17.866 ************************************ 00:27:17.866 00:27:17.866 real 1m23.137s 00:27:17.866 user 0m46.399s 00:27:17.866 sys 0m26.248s 00:27:17.866 16:44:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.866 16:44:49 -- common/autotest_common.sh@10 -- # set +x 00:27:17.866 ************************************ 00:27:17.866 END TEST spdk_dd 00:27:17.866 ************************************ 00:27:17.866 16:44:49 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:27:17.866 16:44:49 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:17.866 16:44:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:17.866 16:44:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:17.866 16:44:49 -- common/autotest_common.sh@10 -- # set +x 00:27:17.866 ************************************ 00:27:17.866 START TEST blockdev_nvme 00:27:17.866 ************************************ 00:27:17.866 16:44:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:18.125 * Looking for test storage... 
00:27:18.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:18.125 16:44:49 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:18.125 16:44:49 -- bdev/nbd_common.sh@6 -- # set -e 00:27:18.125 16:44:49 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:18.125 16:44:49 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:18.125 16:44:49 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:18.125 16:44:49 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:18.125 16:44:49 -- bdev/blockdev.sh@18 -- # : 00:27:18.125 16:44:49 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:18.125 16:44:49 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:18.125 16:44:49 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:18.125 16:44:49 -- bdev/blockdev.sh@672 -- # uname -s 00:27:18.125 16:44:49 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:18.125 16:44:49 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:18.125 16:44:49 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:27:18.125 16:44:49 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:18.125 16:44:49 -- bdev/blockdev.sh@682 -- # dek= 00:27:18.125 16:44:49 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:18.125 16:44:49 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:18.125 16:44:49 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:18.125 16:44:49 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:27:18.125 16:44:49 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:27:18.125 16:44:49 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:18.125 16:44:49 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=146176 00:27:18.125 16:44:49 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:18.125 16:44:49 -- bdev/blockdev.sh@47 -- # waitforlisten 146176 00:27:18.125 16:44:49 -- common/autotest_common.sh@819 -- # '[' -z 146176 ']' 00:27:18.125 16:44:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.125 16:44:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:18.125 16:44:49 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:18.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.125 16:44:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.125 16:44:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:18.125 16:44:49 -- common/autotest_common.sh@10 -- # set +x 00:27:18.125 [2024-07-13 16:44:49.462193] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
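At this point blockdev.sh has launched spdk_tgt in the background (pid 146176) and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers. The start/wait pattern, reduced to a sketch (rpc_get_methods is just one cheap request to probe with; the real helper's checks are more involved):

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # Poll until the UNIX-domain RPC socket accepts a trivial request.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done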
00:27:18.125 [2024-07-13 16:44:49.463265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146176 ] 00:27:18.383 [2024-07-13 16:44:49.619717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.383 [2024-07-13 16:44:49.691956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:18.383 [2024-07-13 16:44:49.692214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.949 16:44:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:18.949 16:44:50 -- common/autotest_common.sh@852 -- # return 0 00:27:18.949 16:44:50 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:18.949 16:44:50 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:27:18.949 16:44:50 -- bdev/blockdev.sh@79 -- # local json 00:27:18.949 16:44:50 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:18.949 16:44:50 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:19.207 16:44:50 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:19.207 16:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.207 16:44:50 -- common/autotest_common.sh@10 -- # set +x 00:27:19.207 16:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.207 16:44:50 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:19.207 16:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.207 16:44:50 -- common/autotest_common.sh@10 -- # set +x 00:27:19.207 16:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.207 16:44:50 -- bdev/blockdev.sh@738 -- # cat 00:27:19.207 16:44:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:19.207 16:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.207 16:44:50 -- common/autotest_common.sh@10 -- # set +x 00:27:19.207 16:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.207 16:44:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:19.207 16:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.207 16:44:50 -- common/autotest_common.sh@10 -- # set +x 00:27:19.207 16:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.207 16:44:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:19.207 16:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.207 16:44:50 -- common/autotest_common.sh@10 -- # set +x 00:27:19.207 16:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.207 16:44:50 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:19.207 16:44:50 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:19.207 16:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.207 16:44:50 -- common/autotest_common.sh@10 -- # set +x 00:27:19.207 16:44:50 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:19.207 16:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.207 16:44:50 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:19.207 16:44:50 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:19.207 16:44:50 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "04216802-456b-4af8-af15-c7104e4126ac"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "04216802-456b-4af8-af15-c7104e4126ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:27:19.207 16:44:50 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:19.207 16:44:50 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:27:19.207 16:44:50 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:19.207 16:44:50 -- bdev/blockdev.sh@752 -- # killprocess 146176 00:27:19.207 16:44:50 -- common/autotest_common.sh@926 -- # '[' -z 146176 ']' 00:27:19.207 16:44:50 -- common/autotest_common.sh@930 -- # kill -0 146176 00:27:19.207 16:44:50 -- common/autotest_common.sh@931 -- # uname 00:27:19.466 16:44:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:19.466 16:44:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146176 00:27:19.466 16:44:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:19.466 killing process with pid 146176 00:27:19.466 16:44:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:19.466 16:44:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146176' 00:27:19.466 16:44:50 -- common/autotest_common.sh@945 -- # kill 146176 00:27:19.466 16:44:50 -- common/autotest_common.sh@950 -- # wait 146176 00:27:20.033 16:44:51 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:20.033 16:44:51 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:20.033 16:44:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:20.033 16:44:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:20.033 16:44:51 -- common/autotest_common.sh@10 -- # set +x 00:27:20.033 ************************************ 00:27:20.033 START TEST bdev_hello_world 00:27:20.033 ************************************ 00:27:20.033 16:44:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:20.033 [2024-07-13 16:44:51.453312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:20.033 [2024-07-13 16:44:51.453767] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146249 ] 00:27:20.291 [2024-07-13 16:44:51.610637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.291 [2024-07-13 16:44:51.677359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.549 [2024-07-13 16:44:51.918175] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:20.549 [2024-07-13 16:44:51.918277] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:27:20.549 [2024-07-13 16:44:51.918331] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:20.549 [2024-07-13 16:44:51.920974] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:20.549 [2024-07-13 16:44:51.921655] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:20.549 [2024-07-13 16:44:51.921709] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:20.549 [2024-07-13 16:44:51.922011] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:20.549 00:27:20.549 [2024-07-13 16:44:51.922058] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:21.115 00:27:21.115 real 0m0.925s 00:27:21.115 user 0m0.554s 00:27:21.115 sys 0m0.272s 00:27:21.115 16:44:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.115 16:44:52 -- common/autotest_common.sh@10 -- # set +x 00:27:21.115 ************************************ 00:27:21.115 END TEST bdev_hello_world 00:27:21.115 ************************************ 00:27:21.115 16:44:52 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:21.115 16:44:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:21.115 16:44:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:21.115 16:44:52 -- common/autotest_common.sh@10 -- # set +x 00:27:21.115 ************************************ 00:27:21.115 START TEST bdev_bounds 00:27:21.115 ************************************ 00:27:21.115 16:44:52 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:27:21.115 16:44:52 -- bdev/blockdev.sh@288 -- # bdevio_pid=146281 00:27:21.115 16:44:52 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:21.115 16:44:52 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:21.115 16:44:52 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 146281' 00:27:21.115 Process bdevio pid: 146281 00:27:21.115 16:44:52 -- bdev/blockdev.sh@291 -- # waitforlisten 146281 00:27:21.115 16:44:52 -- common/autotest_common.sh@819 -- # '[' -z 146281 ']' 00:27:21.115 16:44:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.115 16:44:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:21.115 16:44:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
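bdev_hello_world above completed the full round trip - open Nvme0n1, write a buffer, read it back, print "Hello World!" - in roughly 0.9 s. The example can be run by hand from the repo root with the same config file and bdev name (a sketch; paths assume the layout shown in the log):

    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1
    # Expected to end with: Read string from bdev : Hello World!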
00:27:21.115 16:44:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:21.115 16:44:52 -- common/autotest_common.sh@10 -- # set +x 00:27:21.115 [2024-07-13 16:44:52.435922] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:21.115 [2024-07-13 16:44:52.436355] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146281 ] 00:27:21.372 [2024-07-13 16:44:52.589631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:21.372 [2024-07-13 16:44:52.665371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.372 [2024-07-13 16:44:52.665553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.372 [2024-07-13 16:44:52.665554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.936 16:44:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:21.936 16:44:53 -- common/autotest_common.sh@852 -- # return 0 00:27:21.936 16:44:53 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:22.194 I/O targets: 00:27:22.194 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:27:22.194 00:27:22.194 00:27:22.194 CUnit - A unit testing framework for C - Version 2.1-3 00:27:22.194 http://cunit.sourceforge.net/ 00:27:22.194 00:27:22.194 00:27:22.194 Suite: bdevio tests on: Nvme0n1 00:27:22.194 Test: blockdev write read block ...passed 00:27:22.194 Test: blockdev write zeroes read block ...passed 00:27:22.194 Test: blockdev write zeroes read no split ...passed 00:27:22.194 Test: blockdev write zeroes read split ...passed 00:27:22.194 Test: blockdev write zeroes read split partial ...passed 00:27:22.194 Test: blockdev reset ...[2024-07-13 16:44:53.431881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:22.194 [2024-07-13 16:44:53.434259] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:22.194 passed 00:27:22.194 Test: blockdev write read 8 blocks ...passed 00:27:22.194 Test: blockdev write read size > 128k ...passed 00:27:22.194 Test: blockdev write read invalid size ...passed 00:27:22.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:22.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:22.194 Test: blockdev write read max offset ...passed 00:27:22.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:22.194 Test: blockdev writev readv 8 blocks ...passed 00:27:22.194 Test: blockdev writev readv 30 x 1block ...passed 00:27:22.194 Test: blockdev writev readv block ...passed 00:27:22.194 Test: blockdev writev readv size > 128k ...passed 00:27:22.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:22.194 Test: blockdev comparev and writev ...[2024-07-13 16:44:53.441762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x3140d000 len:0x1000 00:27:22.194 [2024-07-13 16:44:53.441967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:22.194 passed 00:27:22.194 Test: blockdev nvme passthru rw ...passed 00:27:22.194 Test: blockdev nvme passthru vendor specific ...[2024-07-13 16:44:53.442942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:27:22.194 [2024-07-13 16:44:53.443086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:27:22.194 passed 00:27:22.194 Test: blockdev nvme admin passthru ...passed 00:27:22.194 Test: blockdev copy ...passed 00:27:22.194 00:27:22.194 Run Summary: Type Total Ran Passed Failed Inactive 00:27:22.194 suites 1 1 n/a 0 0 00:27:22.194 tests 23 23 23 0 0 00:27:22.194 asserts 152 152 152 0 n/a 00:27:22.194 00:27:22.194 Elapsed time = 0.087 seconds 00:27:22.194 0 00:27:22.194 16:44:53 -- bdev/blockdev.sh@293 -- # killprocess 146281 00:27:22.194 16:44:53 -- common/autotest_common.sh@926 -- # '[' -z 146281 ']' 00:27:22.194 16:44:53 -- common/autotest_common.sh@930 -- # kill -0 146281 00:27:22.194 16:44:53 -- common/autotest_common.sh@931 -- # uname 00:27:22.194 16:44:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:22.195 16:44:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146281 00:27:22.195 16:44:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:22.195 16:44:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:22.195 16:44:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146281' 00:27:22.195 killing process with pid 146281 00:27:22.195 16:44:53 -- common/autotest_common.sh@945 -- # kill 146281 00:27:22.195 16:44:53 -- common/autotest_common.sh@950 -- # wait 146281 00:27:22.453 16:44:53 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:22.453 00:27:22.453 real 0m1.482s 00:27:22.453 user 0m3.507s 00:27:22.453 sys 0m0.383s 00:27:22.453 16:44:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.453 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:27:22.453 ************************************ 00:27:22.453 END TEST bdev_bounds 00:27:22.453 ************************************ 00:27:22.453 16:44:53 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
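bdev_bounds is finished at this point: bdevio ran 23 CUnit tests against Nvme0n1 with zero failures, including a deliberate COMPARE FAILURE (02/85) in the comparev case and a controller detach/re-attach in blockdev reset. The suite can be reproduced with the same two commands the harness used, sketched here without the run_test plumbing:

    # Start bdevio as an RPC server, then drive the test list against it.
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    # (in the harness, waitforlisten sits between these two steps)
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"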
00:27:22.453 16:44:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:27:22.453 16:44:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:22.453 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:27:22.746 ************************************ 00:27:22.746 START TEST bdev_nbd 00:27:22.746 ************************************ 00:27:22.746 16:44:53 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:27:22.746 16:44:53 -- bdev/blockdev.sh@298 -- # uname -s 00:27:22.746 16:44:53 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:22.746 16:44:53 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:22.746 16:44:53 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:22.746 16:44:53 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:27:22.746 16:44:53 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:22.746 16:44:53 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:27:22.746 16:44:53 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:22.746 16:44:53 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:22.746 16:44:53 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:22.746 16:44:53 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:27:22.746 16:44:53 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:27:22.746 16:44:53 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:22.746 16:44:53 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:27:22.746 16:44:53 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:22.746 16:44:53 -- bdev/blockdev.sh@316 -- # nbd_pid=146337 00:27:22.746 16:44:53 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:22.746 16:44:53 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:22.746 16:44:53 -- bdev/blockdev.sh@318 -- # waitforlisten 146337 /var/tmp/spdk-nbd.sock 00:27:22.746 16:44:53 -- common/autotest_common.sh@819 -- # '[' -z 146337 ']' 00:27:22.746 16:44:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:22.746 16:44:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:22.746 16:44:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:22.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:22.746 16:44:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:22.746 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:27:22.746 [2024-07-13 16:44:54.011963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
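bdev_nbd now starts bdev_svc (pid 146337) with its RPC socket at /var/tmp/spdk-nbd.sock; the lines that follow export Nvme0n1 as /dev/nbd0, confirm the kernel lists it, and round-trip data with dd and cmp. The export/check/teardown core, as a sketch:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc nbd_start_disk Nvme0n1 /dev/nbd0      # export the bdev via NBD
    grep -q -w nbd0 /proc/partitions          # kernel now sees the device
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    rpc nbd_stop_disk /dev/nbd0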
00:27:22.746 [2024-07-13 16:44:54.012223] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.746 [2024-07-13 16:44:54.166082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.003 [2024-07-13 16:44:54.243543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.590 16:44:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:23.590 16:44:54 -- common/autotest_common.sh@852 -- # return 0 00:27:23.590 16:44:54 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@24 -- # local i 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:23.590 16:44:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:27:23.910 16:44:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:23.910 16:44:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:23.910 16:44:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:23.910 16:44:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:23.910 16:44:55 -- common/autotest_common.sh@857 -- # local i 00:27:23.910 16:44:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:23.910 16:44:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:23.910 16:44:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:23.910 16:44:55 -- common/autotest_common.sh@861 -- # break 00:27:23.910 16:44:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:23.910 16:44:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:23.910 16:44:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:23.910 1+0 records in 00:27:23.910 1+0 records out 00:27:23.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819245 s, 5.0 MB/s 00:27:23.910 16:44:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:23.910 16:44:55 -- common/autotest_common.sh@874 -- # size=4096 00:27:23.910 16:44:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:23.910 16:44:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:23.910 16:44:55 -- common/autotest_common.sh@877 -- # return 0 00:27:23.910 16:44:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:23.910 16:44:55 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:23.910 16:44:55 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:23.910 16:44:55 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:23.910 { 00:27:23.910 "nbd_device": "/dev/nbd0", 00:27:23.910 "bdev_name": "Nvme0n1" 00:27:23.910 } 00:27:23.910 ]' 00:27:23.910 16:44:55 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:23.910 16:44:55 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:23.910 { 00:27:23.910 "nbd_device": "/dev/nbd0", 00:27:23.910 "bdev_name": "Nvme0n1" 00:27:23.910 } 00:27:23.910 ]' 00:27:23.910 16:44:55 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@51 -- # local i 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@41 -- # break 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@45 -- # return 0 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:24.168 16:44:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:24.426 16:44:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:24.426 16:44:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:24.426 16:44:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:24.426 16:44:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:24.426 16:44:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:24.426 16:44:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@65 -- # true 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@65 -- # count=0 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@122 -- # count=0 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@127 -- # return 0 00:27:24.687 16:44:55 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@12 -- # local i 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:24.687 16:44:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:27:24.687 /dev/nbd0 00:27:24.687 16:44:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:24.687 16:44:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:24.687 16:44:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:24.687 16:44:56 -- common/autotest_common.sh@857 -- # local i 00:27:24.687 16:44:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:24.687 16:44:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:24.687 16:44:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:24.687 16:44:56 -- common/autotest_common.sh@861 -- # break 00:27:24.687 16:44:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:24.687 16:44:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:24.688 16:44:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:24.688 1+0 records in 00:27:24.688 1+0 records out 00:27:24.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587122 s, 7.0 MB/s 00:27:24.688 16:44:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:24.688 16:44:56 -- common/autotest_common.sh@874 -- # size=4096 00:27:24.688 16:44:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:24.688 16:44:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:24.688 16:44:56 -- common/autotest_common.sh@877 -- # return 0 00:27:24.688 16:44:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:24.688 16:44:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:24.688 16:44:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:24.688 16:44:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:24.688 16:44:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:24.946 16:44:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:24.946 { 00:27:24.946 "nbd_device": "/dev/nbd0", 00:27:24.946 "bdev_name": "Nvme0n1" 00:27:24.946 } 00:27:24.946 ]' 00:27:24.946 16:44:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:24.946 { 00:27:24.946 "nbd_device": "/dev/nbd0", 00:27:24.946 "bdev_name": "Nvme0n1" 00:27:24.946 } 00:27:24.946 ]' 00:27:24.946 16:44:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@65 -- # count=1 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@66 -- # echo 1 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@95 -- # count=1 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:27:25.206 16:44:56 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:25.206 256+0 records in 00:27:25.206 256+0 records out 00:27:25.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00841726 s, 125 MB/s 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:25.206 256+0 records in 00:27:25.206 256+0 records out 00:27:25.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0539451 s, 19.4 MB/s 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@51 -- # local i 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:25.206 16:44:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@41 -- # break 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@45 -- # return 0 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:25.465 16:44:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:25.725 16:44:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:25.725 16:44:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:25.725 
16:44:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@65 -- # true 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@65 -- # count=0 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@104 -- # count=0 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@109 -- # return 0 00:27:25.725 16:44:57 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:25.725 16:44:57 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:25.984 malloc_lvol_verify 00:27:25.984 16:44:57 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:25.984 3d8f5405-6491-443d-af70-dd9640c530a7 00:27:25.984 16:44:57 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:26.243 bf5536ff-7fbc-4a5d-8e2a-19632ca71d92 00:27:26.243 16:44:57 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:26.503 /dev/nbd0 00:27:26.503 16:44:57 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:26.503 mke2fs 1.46.5 (30-Dec-2021) 00:27:26.503 00:27:26.503 Filesystem too small for a journal 00:27:26.503 Discarding device blocks: 0/1024 done 00:27:26.503 Creating filesystem with 1024 4k blocks and 1024 inodes 00:27:26.503 00:27:26.503 Allocating group tables: 0/1 done 00:27:26.503 Writing inode tables: 0/1 done 00:27:26.503 Writing superblocks and filesystem accounting information: 0/1 done 00:27:26.503 00:27:26.503 16:44:57 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:26.503 16:44:57 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:26.503 16:44:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:26.503 16:44:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:26.503 16:44:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:26.503 16:44:57 -- bdev/nbd_common.sh@51 -- # local i 00:27:26.503 16:44:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:26.503 16:44:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:26.762 16:44:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:26.762 16:44:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:26.762 16:44:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:26.762 16:44:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:26.762 16:44:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:26.762 16:44:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:26.762 16:44:58 -- bdev/nbd_common.sh@41 -- # break 00:27:26.762 16:44:58 -- 
bdev/nbd_common.sh@45 -- # return 0 00:27:26.762 16:44:58 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:26.762 16:44:58 -- bdev/nbd_common.sh@147 -- # return 0 00:27:26.762 16:44:58 -- bdev/blockdev.sh@324 -- # killprocess 146337 00:27:26.762 16:44:58 -- common/autotest_common.sh@926 -- # '[' -z 146337 ']' 00:27:26.762 16:44:58 -- common/autotest_common.sh@930 -- # kill -0 146337 00:27:26.762 16:44:58 -- common/autotest_common.sh@931 -- # uname 00:27:26.762 16:44:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:26.762 16:44:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146337 00:27:26.762 16:44:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:26.762 16:44:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:26.762 killing process with pid 146337 00:27:26.762 16:44:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146337' 00:27:26.762 16:44:58 -- common/autotest_common.sh@945 -- # kill 146337 00:27:26.762 16:44:58 -- common/autotest_common.sh@950 -- # wait 146337 00:27:27.331 16:44:58 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:27:27.331 00:27:27.331 real 0m4.696s 00:27:27.331 user 0m6.537s 00:27:27.331 sys 0m1.586s 00:27:27.331 16:44:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.331 16:44:58 -- common/autotest_common.sh@10 -- # set +x 00:27:27.331 ************************************ 00:27:27.331 END TEST bdev_nbd 00:27:27.331 ************************************ 00:27:27.331 16:44:58 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:27:27.331 16:44:58 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:27:27.331 skipping fio tests on NVMe due to multi-ns failures. 00:27:27.331 16:44:58 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:27:27.331 16:44:58 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:27.331 16:44:58 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:27.331 16:44:58 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:27:27.331 16:44:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:27.331 16:44:58 -- common/autotest_common.sh@10 -- # set +x 00:27:27.331 ************************************ 00:27:27.331 START TEST bdev_verify 00:27:27.331 ************************************ 00:27:27.331 16:44:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:27.331 [2024-07-13 16:44:58.782344] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:27.331 [2024-07-13 16:44:58.783304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146521 ] 00:27:27.590 [2024-07-13 16:44:58.943245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:27.590 [2024-07-13 16:44:59.019314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.590 [2024-07-13 16:44:59.019315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.849 Running I/O for 5 seconds... 
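For anyone reproducing the bdev_verify stage by hand: it is a plain bdevperf invocation with a verify workload. A minimal sketch built only from the flags and paths visible above (flag glosses follow bdevperf's usage text; -C is carried over verbatim from the run without interpretation):

  ./build/examples/bdevperf \
      --json test/bdev/bdev.json \   # attach Nvme0 from the same JSON config used throughout
      -q 128 \                       # 128 outstanding I/Os per job
      -o 4096 \                      # 4 KiB I/O size
      -w verify \                    # write, then read back and compare
      -t 5 \                         # run for 5 seconds
      -C -m 0x3                      # -m 0x3 runs reactors on cores 0 and 1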
00:27:33.120 00:27:33.120 Latency(us) 00:27:33.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.120 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:33.120 Verification LBA range: start 0x0 length 0xa0000 00:27:33.120 Nvme0n1 : 5.01 16544.34 64.63 0.00 0.00 7704.85 423.25 21595.67 00:27:33.120 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:33.120 Verification LBA range: start 0xa0000 length 0xa0000 00:27:33.120 Nvme0n1 : 5.01 17742.85 69.31 0.00 0.00 7183.22 429.10 18474.91 00:27:33.120 =================================================================================================================== 00:27:33.120 Total : 34287.19 133.93 0.00 0.00 7434.99 423.25 21595.67 00:27:39.683 00:27:39.683 real 0m11.287s 00:27:39.683 user 0m21.557s 00:27:39.683 sys 0m0.378s 00:27:39.683 ************************************ 00:27:39.683 END TEST bdev_verify 00:27:39.683 ************************************ 00:27:39.683 16:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.683 16:45:09 -- common/autotest_common.sh@10 -- # set +x 00:27:39.683 16:45:10 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:39.683 16:45:10 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:27:39.683 16:45:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:39.683 16:45:10 -- common/autotest_common.sh@10 -- # set +x 00:27:39.683 ************************************ 00:27:39.683 START TEST bdev_verify_big_io 00:27:39.683 ************************************ 00:27:39.683 16:45:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:39.683 [2024-07-13 16:45:10.136062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:39.683 [2024-07-13 16:45:10.136359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146650 ] 00:27:39.683 [2024-07-13 16:45:10.294220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:39.683 [2024-07-13 16:45:10.367638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.683 [2024-07-13 16:45:10.367638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.683 Running I/O for 5 seconds... 
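A quick consistency check on the verify table above: MiB/s should equal IOPS x I/O size / 2^20, which with 4 KiB I/Os is simply IOPS / 256. Indeed 16544.34 / 256 = 64.63 and 17742.85 / 256 = 69.31, matching the reported columns.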
00:27:44.982 00:27:44.982 Latency(us) 00:27:44.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.982 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:44.982 Verification LBA range: start 0x0 length 0xa000 00:27:44.982 Nvme0n1 : 5.07 1186.98 74.19 0.00 0.00 106053.54 475.92 207717.91 00:27:44.982 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:44.982 Verification LBA range: start 0xa000 length 0xa000 00:27:44.982 Nvme0n1 : 5.07 1345.51 84.09 0.00 0.00 93464.58 542.23 152792.50 00:27:44.982 =================================================================================================================== 00:27:44.982 Total : 2532.49 158.28 0.00 0.00 99367.01 475.92 207717.91 00:27:44.982 00:27:44.982 real 0m6.272s 00:27:44.982 user 0m11.627s 00:27:44.982 sys 0m0.290s 00:27:44.982 16:45:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.982 16:45:16 -- common/autotest_common.sh@10 -- # set +x 00:27:44.982 ************************************ 00:27:44.982 END TEST bdev_verify_big_io 00:27:44.982 ************************************ 00:27:44.982 16:45:16 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:44.982 16:45:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:44.983 16:45:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.983 16:45:16 -- common/autotest_common.sh@10 -- # set +x 00:27:44.983 ************************************ 00:27:44.983 START TEST bdev_write_zeroes 00:27:44.983 ************************************ 00:27:44.983 16:45:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:45.242 [2024-07-13 16:45:16.460143] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:45.242 [2024-07-13 16:45:16.460392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146746 ] 00:27:45.242 [2024-07-13 16:45:16.614408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.242 [2024-07-13 16:45:16.683841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.500 Running I/O for 1 seconds... 
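The same check holds for the 64 KiB big-I/O table above, where MiB/s is IOPS / 16: 1186.98 / 16 = 74.19 and 1345.51 / 16 = 84.09. The bdev_write_zeroes stage now starting reuses the identical bdevperf harness, changing only -w write_zeroes and -t 1, on a single core.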
00:27:46.872 00:27:46.872 Latency(us) 00:27:46.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.872 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:46.872 Nvme0n1 : 1.00 63814.08 249.27 0.00 0.00 2001.35 760.69 13419.28 00:27:46.872 =================================================================================================================== 00:27:46.872 Total : 63814.08 249.27 0.00 0.00 2001.35 760.69 13419.28 00:27:47.131 00:27:47.132 real 0m1.950s 00:27:47.132 user 0m1.585s 00:27:47.132 sys 0m0.265s 00:27:47.132 16:45:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.132 16:45:18 -- common/autotest_common.sh@10 -- # set +x 00:27:47.132 ************************************ 00:27:47.132 END TEST bdev_write_zeroes 00:27:47.132 ************************************ 00:27:47.132 16:45:18 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:47.132 16:45:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:47.132 16:45:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.132 16:45:18 -- common/autotest_common.sh@10 -- # set +x 00:27:47.132 ************************************ 00:27:47.132 START TEST bdev_json_nonenclosed 00:27:47.132 ************************************ 00:27:47.132 16:45:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:47.132 [2024-07-13 16:45:18.489362] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:47.132 [2024-07-13 16:45:18.489645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146790 ] 00:27:47.390 [2024-07-13 16:45:18.646570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.390 [2024-07-13 16:45:18.731073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.390 [2024-07-13 16:45:18.731329] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:27:47.390 [2024-07-13 16:45:18.731375] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:47.649 00:27:47.649 real 0m0.508s 00:27:47.649 user 0m0.254s 00:27:47.649 sys 0m0.154s 00:27:47.649 16:45:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.649 16:45:18 -- common/autotest_common.sh@10 -- # set +x 00:27:47.649 ************************************ 00:27:47.649 END TEST bdev_json_nonenclosed 00:27:47.649 ************************************ 00:27:47.649 16:45:18 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:47.649 16:45:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:27:47.649 16:45:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.649 16:45:18 -- common/autotest_common.sh@10 -- # set +x 00:27:47.649 ************************************ 00:27:47.649 START TEST bdev_json_nonarray 00:27:47.649 ************************************ 00:27:47.649 16:45:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:47.649 [2024-07-13 16:45:19.057806] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:47.649 [2024-07-13 16:45:19.058265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146829 ] 00:27:47.907 [2024-07-13 16:45:19.214187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.907 [2024-07-13 16:45:19.285351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.907 [2024-07-13 16:45:19.285578] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
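For context on the two negative tests above: each feeds bdevperf a deliberately malformed config and expects spdk_subsystem_init_from_json_config to reject it, as the two *ERROR* lines show. The actual fixtures are test/bdev/nonenclosed.json and test/bdev/nonarray.json; plausible minimal shapes (illustrative guesses, not the verbatim file contents) would be a top-level "subsystems": [] with no surrounding { } for the first, and { "subsystems": {} } (an object where an array is required) for the second.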
00:27:47.907 [2024-07-13 16:45:19.285632] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:48.166 00:27:48.166 real 0m0.492s 00:27:48.166 user 0m0.226s 00:27:48.166 sys 0m0.165s 00:27:48.166 16:45:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.166 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:27:48.166 ************************************ 00:27:48.166 END TEST bdev_json_nonarray 00:27:48.166 ************************************ 00:27:48.166 16:45:19 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:27:48.166 16:45:19 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:27:48.166 16:45:19 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:27:48.166 16:45:19 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:27:48.166 16:45:19 -- bdev/blockdev.sh@809 -- # cleanup 00:27:48.166 16:45:19 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:48.166 16:45:19 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:48.166 16:45:19 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:27:48.166 16:45:19 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:27:48.166 16:45:19 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:27:48.166 16:45:19 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:27:48.166 00:27:48.166 real 0m30.291s 00:27:48.166 user 0m48.140s 00:27:48.166 sys 0m4.522s 00:27:48.166 16:45:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.166 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:27:48.166 ************************************ 00:27:48.166 END TEST blockdev_nvme 00:27:48.166 ************************************ 00:27:48.166 16:45:19 -- spdk/autotest.sh@219 -- # uname -s 00:27:48.166 16:45:19 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:27:48.166 16:45:19 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:27:48.166 16:45:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:48.166 16:45:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:48.166 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:27:48.166 ************************************ 00:27:48.166 START TEST blockdev_nvme_gpt 00:27:48.166 ************************************ 00:27:48.166 16:45:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:27:48.426 * Looking for test storage... 
00:27:48.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:48.426 16:45:19 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:48.426 16:45:19 -- bdev/nbd_common.sh@6 -- # set -e 00:27:48.426 16:45:19 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:48.426 16:45:19 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:48.426 16:45:19 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:48.426 16:45:19 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:48.426 16:45:19 -- bdev/blockdev.sh@18 -- # : 00:27:48.426 16:45:19 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:48.426 16:45:19 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:48.426 16:45:19 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:48.426 16:45:19 -- bdev/blockdev.sh@672 -- # uname -s 00:27:48.426 16:45:19 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:48.426 16:45:19 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:48.426 16:45:19 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:27:48.426 16:45:19 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:48.426 16:45:19 -- bdev/blockdev.sh@682 -- # dek= 00:27:48.426 16:45:19 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:48.426 16:45:19 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:48.426 16:45:19 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:48.426 16:45:19 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:27:48.426 16:45:19 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:27:48.426 16:45:19 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:48.426 16:45:19 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=146897 00:27:48.426 16:45:19 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:48.426 16:45:19 -- bdev/blockdev.sh@47 -- # waitforlisten 146897 00:27:48.426 16:45:19 -- common/autotest_common.sh@819 -- # '[' -z 146897 ']' 00:27:48.426 16:45:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.426 16:45:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.426 16:45:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.426 16:45:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.426 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:27:48.426 16:45:19 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:48.426 [2024-07-13 16:45:19.832509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:48.426 [2024-07-13 16:45:19.832762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146897 ] 00:27:48.685 [2024-07-13 16:45:19.987132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.685 [2024-07-13 16:45:20.111560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:48.685 [2024-07-13 16:45:20.111936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.620 16:45:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:49.620 16:45:20 -- common/autotest_common.sh@852 -- # return 0 00:27:49.620 16:45:20 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:49.620 16:45:20 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:27:49.620 16:45:20 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:49.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:49.878 Waiting for block devices as requested 00:27:49.878 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:50.137 16:45:21 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:27:50.137 16:45:21 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:27:50.137 16:45:21 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:27:50.137 16:45:21 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:27:50.137 16:45:21 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:27:50.137 16:45:21 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:27:50.137 16:45:21 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:27:50.137 16:45:21 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:50.137 16:45:21 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:27:50.137 16:45:21 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:27:50.137 16:45:21 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:27:50.137 16:45:21 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:27:50.137 16:45:21 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:27:50.137 16:45:21 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:27:50.137 16:45:21 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:27:50.137 16:45:21 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:27:50.137 16:45:21 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:27:50.137 BYT; 00:27:50.137 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:27:50.137 16:45:21 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:27:50.137 BYT; 00:27:50.137 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:27:50.137 16:45:21 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:27:50.137 16:45:21 -- bdev/blockdev.sh@114 -- # break 00:27:50.137 16:45:21 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:27:50.137 16:45:21 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:27:50.137 16:45:21 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:27:50.137 16:45:21 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:27:50.396 16:45:21 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:27:50.396 16:45:21 -- scripts/common.sh@410 -- # local spdk_guid 00:27:50.396 16:45:21 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:27:50.396 16:45:21 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:50.396 16:45:21 -- scripts/common.sh@415 -- # IFS='()' 00:27:50.396 16:45:21 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:27:50.396 16:45:21 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:50.396 16:45:21 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:27:50.396 16:45:21 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:27:50.396 16:45:21 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:27:50.396 16:45:21 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:27:50.396 16:45:21 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:27:50.396 16:45:21 -- scripts/common.sh@422 -- # local spdk_guid 00:27:50.396 16:45:21 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:27:50.396 16:45:21 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:50.396 16:45:21 -- scripts/common.sh@427 -- # IFS='()' 00:27:50.396 16:45:21 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:27:50.396 16:45:21 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:50.396 16:45:21 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:27:50.396 16:45:21 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:27:50.396 16:45:21 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:27:50.396 16:45:21 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:27:50.396 16:45:21 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:27:51.773 The operation has completed successfully. 00:27:51.773 16:45:22 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:27:52.707 The operation has completed successfully. 
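Condensed, the setup_gpt_conf steps above come down to the following sequence (commands and GUIDs exactly as they appear in this run; the two partition-type GUIDs are grepped out of module/bdev/gpt/gpt.h as SPDK_GPT_PART_TYPE_GUID and SPDK_GPT_PART_TYPE_GUID_OLD):

  # label the disk and split it into two half-disk partitions
  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # retype partition 1 to the current SPDK GPT type GUID and partition 2 to
  # the legacy one, pinning each unique partition GUID to a known value
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1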
00:27:52.707 16:45:23 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:52.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:52.965 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.901 16:45:25 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:27:53.901 16:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.901 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:27:53.901 [] 00:27:53.901 16:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.901 16:45:25 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:27:53.901 16:45:25 -- bdev/blockdev.sh@79 -- # local json 00:27:53.901 16:45:25 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:53.901 16:45:25 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:53.901 16:45:25 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:53.901 16:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.901 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:27:53.901 16:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.901 16:45:25 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:53.901 16:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.901 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:27:53.901 16:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.901 16:45:25 -- bdev/blockdev.sh@738 -- # cat 00:27:53.901 16:45:25 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:53.901 16:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.901 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:27:54.160 16:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.160 16:45:25 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:54.160 16:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.160 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:27:54.160 16:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.160 16:45:25 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:54.160 16:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.160 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:27:54.160 16:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.160 16:45:25 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:54.160 16:45:25 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:54.160 16:45:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.160 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:27:54.160 16:45:25 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:54.160 16:45:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.160 16:45:25 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:54.160 16:45:25 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:27:54.160 16:45:25 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:54.160 16:45:25 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:54.160 16:45:25 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:27:54.160 16:45:25 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:54.161 16:45:25 -- bdev/blockdev.sh@752 -- # killprocess 146897 00:27:54.161 16:45:25 -- common/autotest_common.sh@926 -- # '[' -z 146897 ']' 00:27:54.161 16:45:25 -- common/autotest_common.sh@930 -- # kill -0 146897 00:27:54.161 16:45:25 -- common/autotest_common.sh@931 -- # uname 00:27:54.161 16:45:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:54.161 16:45:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146897 00:27:54.161 16:45:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:54.161 killing process with pid 146897 00:27:54.161 16:45:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:54.161 16:45:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146897' 00:27:54.161 16:45:25 -- common/autotest_common.sh@945 -- # kill 146897 00:27:54.161 16:45:25 -- common/autotest_common.sh@950 -- # wait 146897 00:27:55.098 16:45:26 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:55.098 16:45:26 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:27:55.098 16:45:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:55.098 16:45:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.098 16:45:26 -- common/autotest_common.sh@10 -- # set +x 00:27:55.098 ************************************ 00:27:55.098 START TEST bdev_hello_world 00:27:55.098 ************************************ 00:27:55.098 16:45:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 
'' 00:27:55.098 [2024-07-13 16:45:26.316078] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:55.098 [2024-07-13 16:45:26.316388] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147319 ] 00:27:55.098 [2024-07-13 16:45:26.471621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.098 [2024-07-13 16:45:26.551145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.357 [2024-07-13 16:45:26.799554] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:55.357 [2024-07-13 16:45:26.799627] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:27:55.357 [2024-07-13 16:45:26.799693] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:55.357 [2024-07-13 16:45:26.802364] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:55.357 [2024-07-13 16:45:26.802889] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:55.357 [2024-07-13 16:45:26.802929] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:55.357 [2024-07-13 16:45:26.803252] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:55.357 00:27:55.357 [2024-07-13 16:45:26.803296] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:55.924 00:27:55.924 real 0m0.962s 00:27:55.924 user 0m0.541s 00:27:55.924 sys 0m0.322s 00:27:55.924 16:45:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.924 ************************************ 00:27:55.924 END TEST bdev_hello_world 00:27:55.924 ************************************ 00:27:55.924 16:45:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.924 16:45:27 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:55.924 16:45:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:55.924 16:45:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.924 16:45:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.924 ************************************ 00:27:55.924 START TEST bdev_bounds 00:27:55.924 ************************************ 00:27:55.924 16:45:27 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:27:55.924 16:45:27 -- bdev/blockdev.sh@288 -- # bdevio_pid=147359 00:27:55.925 16:45:27 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:55.925 16:45:27 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:55.925 Process bdevio pid: 147359 00:27:55.925 16:45:27 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 147359' 00:27:55.925 16:45:27 -- bdev/blockdev.sh@291 -- # waitforlisten 147359 00:27:55.925 16:45:27 -- common/autotest_common.sh@819 -- # '[' -z 147359 ']' 00:27:55.925 16:45:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.925 16:45:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:55.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.925 16:45:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
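The bdev_hello_world stage above reduces to one run of the hello_bdev example against the first GPT partition; a by-hand sketch with the arguments from this run (the trailing empty positional argument is dropped here) is:

  ./build/examples/hello_bdev \
      --json test/bdev/bdev.json \   # same Nvme0 attach config as the other stages
      -b Nvme0n1p1                   # open the bdev, write "Hello World!", read it back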
00:27:55.925 16:45:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:55.925 16:45:27 -- common/autotest_common.sh@10 -- # set +x 00:27:55.925 [2024-07-13 16:45:27.345414] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:55.925 [2024-07-13 16:45:27.345605] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147359 ] 00:27:56.183 [2024-07-13 16:45:27.497834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.183 [2024-07-13 16:45:27.573726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.183 [2024-07-13 16:45:27.573924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.183 [2024-07-13 16:45:27.573900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.122 16:45:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:57.122 16:45:28 -- common/autotest_common.sh@852 -- # return 0 00:27:57.122 16:45:28 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:57.122 I/O targets: 00:27:57.122 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:27:57.122 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:27:57.122 00:27:57.122 00:27:57.122 CUnit - A unit testing framework for C - Version 2.1-3 00:27:57.122 http://cunit.sourceforge.net/ 00:27:57.122 00:27:57.122 00:27:57.122 Suite: bdevio tests on: Nvme0n1p2 00:27:57.122 Test: blockdev write read block ...passed 00:27:57.122 Test: blockdev write zeroes read block ...passed 00:27:57.122 Test: blockdev write zeroes read no split ...passed 00:27:57.122 Test: blockdev write zeroes read split ...passed 00:27:57.122 Test: blockdev write zeroes read split partial ...passed 00:27:57.122 Test: blockdev reset ...[2024-07-13 16:45:28.385226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:57.122 passed 00:27:57.122 Test: blockdev write read 8 blocks ...[2024-07-13 16:45:28.387606] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:57.122 passed 00:27:57.122 Test: blockdev write read size > 128k ...passed 00:27:57.122 Test: blockdev write read invalid size ...passed 00:27:57.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:57.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:57.122 Test: blockdev write read max offset ...passed 00:27:57.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:57.122 Test: blockdev writev readv 8 blocks ...passed 00:27:57.122 Test: blockdev writev readv 30 x 1block ...passed 00:27:57.122 Test: blockdev writev readv block ...passed 00:27:57.122 Test: blockdev writev readv size > 128k ...passed 00:27:57.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:57.122 Test: blockdev comparev and writev ...[2024-07-13 16:45:28.394098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x6980b000 len:0x1000 00:27:57.122 passed 00:27:57.122 Test: blockdev nvme passthru rw ...passed 00:27:57.122 Test: blockdev nvme passthru vendor specific ...passed 00:27:57.122 Test: blockdev nvme admin passthru ...passed 00:27:57.122 Test: blockdev copy ...[2024-07-13 16:45:28.394175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:57.122 passed 00:27:57.122 Suite: bdevio tests on: Nvme0n1p1 00:27:57.122 Test: blockdev write read block ...passed 00:27:57.122 Test: blockdev write zeroes read block ...passed 00:27:57.122 Test: blockdev write zeroes read no split ...passed 00:27:57.122 Test: blockdev write zeroes read split ...passed 00:27:57.122 Test: blockdev write zeroes read split partial ...passed 00:27:57.122 Test: blockdev reset ...[2024-07-13 16:45:28.408776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:57.122 passed 00:27:57.122 Test: blockdev write read 8 blocks ...[2024-07-13 16:45:28.411038] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
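A note on the COMPARE / COMPARE FAILURE notice pair above: the miscompare is intentional. The 'comparev and writev' test issues an NVMe COMPARE against data that differs from what was written and expects the (02/85) completion, status code type 0x2 (media and data integrity errors) with status code 0x85 (compare failure), so the logged failure is the passing outcome.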
00:27:57.122 passed 00:27:57.122 Test: blockdev write read size > 128k ...passed 00:27:57.122 Test: blockdev write read invalid size ...passed 00:27:57.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:57.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:57.122 Test: blockdev write read max offset ...passed 00:27:57.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:57.122 Test: blockdev writev readv 8 blocks ...passed 00:27:57.122 Test: blockdev writev readv 30 x 1block ...passed 00:27:57.122 Test: blockdev writev readv block ...passed 00:27:57.122 Test: blockdev writev readv size > 128k ...passed 00:27:57.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:57.122 Test: blockdev comparev and writev ...[2024-07-13 16:45:28.417866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x6980d000 len:0x1000 00:27:57.122 passed 00:27:57.122 Test: blockdev nvme passthru rw ...passed 00:27:57.122 Test: blockdev nvme passthru vendor specific ...passed 00:27:57.122 Test: blockdev nvme admin passthru ...passed 00:27:57.122 Test: blockdev copy ...[2024-07-13 16:45:28.417928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:57.122 passed 00:27:57.122 00:27:57.122 Run Summary: Type Total Ran Passed Failed Inactive 00:27:57.122 suites 2 2 n/a 0 0 00:27:57.122 tests 46 46 46 0 0 00:27:57.122 asserts 284 284 284 0 n/a 00:27:57.122 00:27:57.122 Elapsed time = 0.108 seconds 00:27:57.122 0 00:27:57.122 16:45:28 -- bdev/blockdev.sh@293 -- # killprocess 147359 00:27:57.122 16:45:28 -- common/autotest_common.sh@926 -- # '[' -z 147359 ']' 00:27:57.122 16:45:28 -- common/autotest_common.sh@930 -- # kill -0 147359 00:27:57.122 16:45:28 -- common/autotest_common.sh@931 -- # uname 00:27:57.122 16:45:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:57.122 16:45:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 147359 00:27:57.122 16:45:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:57.122 16:45:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:57.122 killing process with pid 147359 00:27:57.122 16:45:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 147359' 00:27:57.122 16:45:28 -- common/autotest_common.sh@945 -- # kill 147359 00:27:57.122 16:45:28 -- common/autotest_common.sh@950 -- # wait 147359 00:27:57.383 16:45:28 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:57.383 00:27:57.383 real 0m1.533s 00:27:57.383 user 0m3.635s 00:27:57.383 sys 0m0.444s 00:27:57.383 16:45:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.383 16:45:28 -- common/autotest_common.sh@10 -- # set +x 00:27:57.383 ************************************ 00:27:57.383 END TEST bdev_bounds 00:27:57.383 ************************************ 00:27:57.642 16:45:28 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:27:57.642 16:45:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:27:57.642 16:45:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.642 16:45:28 -- common/autotest_common.sh@10 -- # set +x 00:27:57.642 ************************************ 00:27:57.642 START TEST bdev_nbd 00:27:57.642 ************************************ 
00:27:57.642 16:45:28 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:27:57.642 16:45:28 -- bdev/blockdev.sh@298 -- # uname -s 00:27:57.642 16:45:28 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:57.642 16:45:28 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.642 16:45:28 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:57.642 16:45:28 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:27:57.642 16:45:28 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:57.642 16:45:28 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:27:57.642 16:45:28 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:57.642 16:45:28 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:57.642 16:45:28 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:57.642 16:45:28 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:27:57.642 16:45:28 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:57.642 16:45:28 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:57.642 16:45:28 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:57.642 16:45:28 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:57.642 16:45:28 -- bdev/blockdev.sh@316 -- # nbd_pid=147409 00:27:57.642 16:45:28 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:57.642 16:45:28 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:57.642 16:45:28 -- bdev/blockdev.sh@318 -- # waitforlisten 147409 /var/tmp/spdk-nbd.sock 00:27:57.642 16:45:28 -- common/autotest_common.sh@819 -- # '[' -z 147409 ']' 00:27:57.642 16:45:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:57.642 16:45:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:57.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:57.642 16:45:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:57.642 16:45:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:57.642 16:45:28 -- common/autotest_common.sh@10 -- # set +x 00:27:57.642 [2024-07-13 16:45:28.957034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
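The bdev_nbd stage starting here replays the earlier single-bdev RPC round-trip with the two GPT partitions. Reduced to bare rpc.py calls (socket path and verbs exactly as logged; the retry loops on /proc/partitions are omitted), the core flow is:

  RPC='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $RPC nbd_start_disk Nvme0n1p1 /dev/nbd0          # export each bdev as a kernel nbd device
  $RPC nbd_start_disk Nvme0n1p2 /dev/nbd1
  $RPC nbd_get_disks | jq -r '.[] | .nbd_device'   # enumerate: /dev/nbd0, /dev/nbd1
  $RPC nbd_stop_disk /dev/nbd0                     # detach and wait for nbdX to leave /proc/partitions
  $RPC nbd_stop_disk /dev/nbd1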
00:27:57.642 [2024-07-13 16:45:28.957213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.642 [2024-07-13 16:45:29.102852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.901 [2024-07-13 16:45:29.175917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.469 16:45:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:58.469 16:45:29 -- common/autotest_common.sh@852 -- # return 0 00:27:58.469 16:45:29 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@24 -- # local i 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:58.469 16:45:29 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:27:58.728 16:45:30 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:58.728 16:45:30 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:58.728 16:45:30 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:58.728 16:45:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:58.728 16:45:30 -- common/autotest_common.sh@857 -- # local i 00:27:58.728 16:45:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:58.728 16:45:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:58.728 16:45:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:58.728 16:45:30 -- common/autotest_common.sh@861 -- # break 00:27:58.728 16:45:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:58.728 16:45:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:58.728 16:45:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:58.728 1+0 records in 00:27:58.728 1+0 records out 00:27:58.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848787 s, 4.8 MB/s 00:27:58.728 16:45:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.728 16:45:30 -- common/autotest_common.sh@874 -- # size=4096 00:27:58.728 16:45:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.728 16:45:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:58.728 16:45:30 -- common/autotest_common.sh@877 -- # return 0 00:27:58.728 16:45:30 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:58.728 16:45:30 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:58.728 16:45:30 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:27:58.987 16:45:30 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:58.987 16:45:30 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:58.987 16:45:30 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:58.987 16:45:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:27:58.987 16:45:30 -- common/autotest_common.sh@857 -- # local i 00:27:58.987 16:45:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:58.987 16:45:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:58.987 16:45:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:27:58.987 16:45:30 -- common/autotest_common.sh@861 -- # break 00:27:58.987 16:45:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:58.987 16:45:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:58.987 16:45:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:58.987 1+0 records in 00:27:58.987 1+0 records out 00:27:58.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513081 s, 8.0 MB/s 00:27:58.987 16:45:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.987 16:45:30 -- common/autotest_common.sh@874 -- # size=4096 00:27:58.987 16:45:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.987 16:45:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:58.987 16:45:30 -- common/autotest_common.sh@877 -- # return 0 00:27:58.987 16:45:30 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:58.987 16:45:30 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:58.987 16:45:30 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:59.246 16:45:30 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:59.246 { 00:27:59.246 "nbd_device": "/dev/nbd0", 00:27:59.246 "bdev_name": "Nvme0n1p1" 00:27:59.246 }, 00:27:59.246 { 00:27:59.246 "nbd_device": "/dev/nbd1", 00:27:59.246 "bdev_name": "Nvme0n1p2" 00:27:59.246 } 00:27:59.246 ]' 00:27:59.246 16:45:30 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:59.246 16:45:30 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:59.246 16:45:30 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:59.246 { 00:27:59.246 "nbd_device": "/dev/nbd0", 00:27:59.246 "bdev_name": "Nvme0n1p1" 00:27:59.246 }, 00:27:59.246 { 00:27:59.246 "nbd_device": "/dev/nbd1", 00:27:59.246 "bdev_name": "Nvme0n1p2" 00:27:59.246 } 00:27:59.246 ]' 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@51 -- # local i 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:59.505 16:45:30 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@41 -- # break 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@45 -- # return 0 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:59.505 16:45:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@41 -- # break 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@45 -- # return 0 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.764 16:45:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@65 -- # true 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@65 -- # count=0 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@122 -- # count=0 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@127 -- # return 0 00:28:00.023 16:45:31 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@12 -- # local i 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:00.023 16:45:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:28:00.282 /dev/nbd0 00:28:00.282 16:45:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:00.282 16:45:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:00.282 16:45:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:00.282 16:45:31 -- common/autotest_common.sh@857 -- # local i 00:28:00.282 16:45:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:00.282 16:45:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:00.282 16:45:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:00.282 16:45:31 -- common/autotest_common.sh@861 -- # break 00:28:00.282 16:45:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:00.282 16:45:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:00.282 16:45:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:00.282 1+0 records in 00:28:00.282 1+0 records out 00:28:00.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643689 s, 6.4 MB/s 00:28:00.282 16:45:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.282 16:45:31 -- common/autotest_common.sh@874 -- # size=4096 00:28:00.282 16:45:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.282 16:45:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:00.282 16:45:31 -- common/autotest_common.sh@877 -- # return 0 00:28:00.282 16:45:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:00.282 16:45:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:00.282 16:45:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:28:00.541 /dev/nbd1 00:28:00.541 16:45:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:00.541 16:45:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:00.541 16:45:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:00.541 16:45:31 -- common/autotest_common.sh@857 -- # local i 00:28:00.541 16:45:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:00.541 16:45:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:00.541 16:45:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:00.541 16:45:31 -- common/autotest_common.sh@861 -- # break 00:28:00.541 16:45:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:00.541 16:45:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:00.541 16:45:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:00.541 1+0 records in 00:28:00.541 1+0 records out 00:28:00.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604854 s, 6.8 MB/s 00:28:00.541 16:45:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.541 16:45:31 -- common/autotest_common.sh@874 -- # size=4096 00:28:00.541 16:45:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.541 16:45:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:00.541 16:45:31 -- common/autotest_common.sh@877 -- # return 0 00:28:00.541 16:45:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:00.541 16:45:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:00.541 16:45:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
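At this point both GPT partition bdevs are visible to the kernel as /dev/nbd0 and /dev/nbd1, and the harness moves on to counting them. Condensed into plain shell, the attach-and-wait sequence above amounts to the following sketch (only the rpc.py path, socket name, and dd parameters recorded in this run are assumed; the until-loop stands in for the bounded 20-iteration waitfornbd loop in autotest_common.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Export each SPDK bdev as a kernel NBD node over the dedicated RPC socket.
"$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
"$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
# Wait until the kernel registers the device, then prove it is readable with
# one direct-I/O 4 KiB read, exactly as the dd calls above do.
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct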
00:28:00.541 16:45:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:00.541 16:45:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:00.799 16:45:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:00.799 { 00:28:00.799 "nbd_device": "/dev/nbd0", 00:28:00.799 "bdev_name": "Nvme0n1p1" 00:28:00.799 }, 00:28:00.799 { 00:28:00.799 "nbd_device": "/dev/nbd1", 00:28:00.799 "bdev_name": "Nvme0n1p2" 00:28:00.799 } 00:28:00.799 ]' 00:28:00.799 16:45:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:00.799 { 00:28:00.799 "nbd_device": "/dev/nbd0", 00:28:00.799 "bdev_name": "Nvme0n1p1" 00:28:00.799 }, 00:28:00.799 { 00:28:00.799 "nbd_device": "/dev/nbd1", 00:28:00.799 "bdev_name": "Nvme0n1p2" 00:28:00.799 } 00:28:00.799 ]' 00:28:00.799 16:45:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:00.799 16:45:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:00.799 /dev/nbd1' 00:28:00.799 16:45:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:00.799 /dev/nbd1' 00:28:00.799 16:45:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:00.799 16:45:32 -- bdev/nbd_common.sh@65 -- # count=2 00:28:00.799 16:45:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:28:00.799 16:45:32 -- bdev/nbd_common.sh@95 -- # count=2 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:00.800 256+0 records in 00:28:00.800 256+0 records out 00:28:00.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100014 s, 105 MB/s 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:00.800 256+0 records in 00:28:00.800 256+0 records out 00:28:00.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0781247 s, 13.4 MB/s 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:00.800 16:45:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:01.059 256+0 records in 00:28:01.059 256+0 records out 00:28:01.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0793943 s, 13.2 MB/s 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
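nbd_dd_data_verify above seeds a 1 MiB random file and writes it through both NBD nodes with direct I/O; the verify branch it has just entered re-reads each device with cmp in the lines that follow. The same round trip as a standalone sketch, reusing the dd and cmp parameters from the log:

dd if=/dev/urandom of=nbdrandtest bs=4096 count=256           # 256 x 4 KiB = 1 MiB of test data
for dev in /dev/nbd0 /dev/nbd1; do
  dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct  # write phase
done
for dev in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M nbdrandtest "$dev"                             # verify phase: byte-compare the first 1 MiB
done
rm nbdrandtest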
00:28:01.059 16:45:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@51 -- # local i 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@41 -- # break 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@45 -- # return 0 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:01.059 16:45:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@41 -- # break 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@45 -- # return 0 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:01.319 16:45:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:01.577 16:45:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:01.577 16:45:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:01.577 16:45:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:01.577 16:45:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:01.577 16:45:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:01.577 16:45:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:01.577 16:45:32 -- bdev/nbd_common.sh@65 -- # true 00:28:01.578 16:45:32 -- bdev/nbd_common.sh@65 -- # count=0 00:28:01.578 16:45:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:01.578 16:45:32 -- bdev/nbd_common.sh@104 -- # count=0 00:28:01.578 16:45:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:01.578 16:45:32 -- 
bdev/nbd_common.sh@109 -- # return 0 00:28:01.578 16:45:32 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:01.578 16:45:32 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:01.578 16:45:32 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:01.578 16:45:32 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:01.578 16:45:32 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:01.578 16:45:32 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:01.836 malloc_lvol_verify 00:28:01.836 16:45:33 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:02.094 7738d67b-baec-4213-9dda-4f39afcd45e1 00:28:02.094 16:45:33 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:02.094 7158be01-e11b-466f-84fe-e51fed9537ba 00:28:02.095 16:45:33 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:02.352 /dev/nbd0 00:28:02.352 16:45:33 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:02.352 mke2fs 1.46.5 (30-Dec-2021) 00:28:02.352 00:28:02.352 Filesystem too small for a journal 00:28:02.353 Discarding device blocks: 0/1024 done 00:28:02.353 Creating filesystem with 1024 4k blocks and 1024 inodes 00:28:02.353 00:28:02.353 Allocating group tables: 0/1 done 00:28:02.353 Writing inode tables: 0/1 done 00:28:02.353 Writing superblocks and filesystem accounting information: 0/1 done 00:28:02.353 00:28:02.353 16:45:33 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:02.353 16:45:33 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:02.353 16:45:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:02.353 16:45:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:02.353 16:45:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:02.353 16:45:33 -- bdev/nbd_common.sh@51 -- # local i 00:28:02.353 16:45:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:02.353 16:45:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@41 -- # break 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@45 -- # return 0 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:02.611 16:45:33 -- bdev/nbd_common.sh@147 -- # return 0 00:28:02.611 16:45:33 -- bdev/blockdev.sh@324 -- # killprocess 147409 00:28:02.611 16:45:33 -- common/autotest_common.sh@926 -- # '[' -z 147409 ']' 00:28:02.611 16:45:33 -- common/autotest_common.sh@930 -- # kill -0 147409 00:28:02.611 16:45:33 -- common/autotest_common.sh@931 -- # uname 00:28:02.611 16:45:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:02.611 16:45:33 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 147409 00:28:02.611 16:45:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:02.611 killing process with pid 147409 00:28:02.611 16:45:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:02.611 16:45:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 147409' 00:28:02.611 16:45:33 -- common/autotest_common.sh@945 -- # kill 147409 00:28:02.611 16:45:33 -- common/autotest_common.sh@950 -- # wait 147409 00:28:03.176 16:45:34 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:28:03.176 00:28:03.176 real 0m5.501s 00:28:03.176 user 0m7.767s 00:28:03.176 sys 0m1.793s 00:28:03.176 16:45:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:03.176 16:45:34 -- common/autotest_common.sh@10 -- # set +x 00:28:03.176 ************************************ 00:28:03.176 END TEST bdev_nbd 00:28:03.176 ************************************ 00:28:03.176 16:45:34 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:28:03.176 16:45:34 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:28:03.176 16:45:34 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:28:03.176 skipping fio tests on NVMe due to multi-ns failures. 00:28:03.176 16:45:34 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:28:03.176 16:45:34 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:03.176 16:45:34 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:03.176 16:45:34 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:03.176 16:45:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:03.176 16:45:34 -- common/autotest_common.sh@10 -- # set +x 00:28:03.176 ************************************ 00:28:03.176 START TEST bdev_verify 00:28:03.176 ************************************ 00:28:03.176 16:45:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:03.176 [2024-07-13 16:45:34.523653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:03.176 [2024-07-13 16:45:34.523820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147653 ] 00:28:03.435 [2024-07-13 16:45:34.667922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:03.435 [2024-07-13 16:45:34.741760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.435 [2024-07-13 16:45:34.741767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.693 Running I/O for 5 seconds... 
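bdev_verify drives both partitions with the bdevperf example application; run_test echoes the full command line above. As a standalone invocation (paths are this run's; the flag glosses are the standard bdevperf meanings, corroborated by the "depth: 128, IO size: 4096" fields in the result table below):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# -q 128: I/O queue depth, -o 4096: 4 KiB I/Os, -w verify: verification workload,
# -t 5: run for 5 seconds, -m 0x3: core mask, i.e. one reactor each on cores 0
# and 1, matching the two "Reactor started" notices above.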
00:28:08.970
00:28:08.970 Latency(us)
00:28:08.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:08.970 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:08.970 Verification LBA range: start 0x0 length 0x4ff80
00:28:08.970 Nvme0n1p1 : 5.02 4832.17 18.88 0.00 0.00 26413.01 2964.72 27088.21
00:28:08.970 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:08.970 Verification LBA range: start 0x4ff80 length 0x4ff80
00:28:08.970 Nvme0n1p1 : 5.02 6265.25 24.47 0.00 0.00 20380.89 2153.33 27587.54
00:28:08.970 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:08.970 Verification LBA range: start 0x0 length 0x4ff7f
00:28:08.970 Nvme0n1p2 : 5.03 4828.25 18.86 0.00 0.00 26390.76 4993.22 26339.23
00:28:08.970 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:08.970 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:28:08.970 Nvme0n1p2 : 5.03 6260.03 24.45 0.00 0.00 20371.76 4525.10 26339.23
00:28:08.970 ===================================================================================================================
00:28:08.970 Total : 22185.70 86.66 0.00 0.00 23000.13 2153.33 27587.54
00:28:12.262 ************************************
00:28:12.262 END TEST bdev_verify
00:28:12.262 ************************************
00:28:12.262
00:28:12.262 real 0m9.197s
00:28:12.262 user 0m17.433s
00:28:12.262 sys 0m0.375s
00:28:12.262 16:45:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:12.262 16:45:43 -- common/autotest_common.sh@10 -- # set +x
00:28:12.262 16:45:43 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:12.521 16:45:43 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:28:12.521 16:45:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:12.521 16:45:43 -- common/autotest_common.sh@10 -- # set +x
00:28:12.521 ************************************
00:28:12.521 START TEST bdev_verify_big_io
00:28:12.521 ************************************
00:28:12.521 16:45:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:12.521 [2024-07-13 16:45:43.810892] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:28:12.521 [2024-07-13 16:45:43.811235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147759 ]
00:28:12.779 [2024-07-13 16:45:43.972542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:12.779 [2024-07-13 16:45:44.058508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:12.779 [2024-07-13 16:45:44.058518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:13.036 Running I/O for 5 seconds...
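A quick cross-check on the bdev_verify Latency table above: the MiB/s column is simply IOPS times the 4 KiB I/O size, e.g. 4832.17 IOPS x 4096 B = 18.88 MiB/s for the first job, and the Total row aggregates all four jobs to 22185.70 IOPS = 86.66 MiB/s.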
00:28:18.307
00:28:18.307 Latency(us)
00:28:18.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.307 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:18.307 Verification LBA range: start 0x0 length 0x4ff8
00:28:18.307 Nvme0n1p1 : 5.15 629.81 39.36 0.00 0.00 199272.77 33953.89 331549.74
00:28:18.307 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:18.307 Verification LBA range: start 0x4ff8 length 0x4ff8
00:28:18.307 Nvme0n1p1 : 5.16 672.16 42.01 0.00 0.00 187875.38 2917.91 307582.29
00:28:18.307 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:18.307 Verification LBA range: start 0x0 length 0x4ff7
00:28:18.307 Nvme0n1p2 : 5.15 645.82 40.36 0.00 0.00 191652.26 1388.74 239674.51
00:28:18.307 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:18.307 Verification LBA range: start 0x4ff7 length 0x4ff7
00:28:18.307 Nvme0n1p2 : 5.16 671.89 41.99 0.00 0.00 184216.76 2933.52 235679.94
00:28:18.307 ===================================================================================================================
00:28:18.307 Total : 2619.68 163.73 0.00 0.00 190603.23 1388.74 331549.74
00:28:18.875 ************************************
00:28:18.876 END TEST bdev_verify_big_io
00:28:18.876 ************************************
00:28:18.876
00:28:18.876 real 0m6.349s
00:28:18.876 user 0m11.789s
00:28:18.876 sys 0m0.285s
00:28:18.876 16:45:50 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:18.876 16:45:50 -- common/autotest_common.sh@10 -- # set +x
00:28:18.876 16:45:50 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:18.876 16:45:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:28:18.876 16:45:50 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:18.876 16:45:50 -- common/autotest_common.sh@10 -- # set +x
00:28:18.876 ************************************
00:28:18.876 START TEST bdev_write_zeroes
00:28:18.876 ************************************
00:28:18.876 16:45:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:18.876 [2024-07-13 16:45:50.224738] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:28:18.876 [2024-07-13 16:45:50.224937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147861 ]
00:28:19.135 [2024-07-13 16:45:50.365063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:19.135 [2024-07-13 16:45:50.435391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:19.394 Running I/O for 1 seconds...
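The big-I/O pass differs from bdev_verify only in its -o 65536 argument, and the same identity holds in its table: 629.81 IOPS x 64 KiB = 39.36 MiB/s for the first job. Note also that user time exceeds real time in these runs (0m11.789s vs 0m6.349s here), which is expected: with -m 0x3 two reactors busy-poll in parallel, so CPU time accrues on both cores at once.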
00:28:20.328
00:28:20.328 Latency(us)
00:28:20.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.328 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:20.328 Nvme0n1p1 : 1.01 27792.67 108.57 0.00 0.00 4596.84 2418.59 15166.90
00:28:20.328 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:20.328 Nvme0n1p2 : 1.01 27721.48 108.29 0.00 0.00 4602.64 2262.55 13294.45
00:28:20.328 ===================================================================================================================
00:28:20.328 Total : 55514.15 216.85 0.00 0.00 4599.74 2262.55 15166.90
00:28:20.897
00:28:20.898 real 0m1.935s
00:28:20.898 user 0m1.567s
00:28:20.898 sys 0m0.269s
00:28:20.898 16:45:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:20.898 16:45:52 -- common/autotest_common.sh@10 -- # set +x
00:28:20.898 ************************************
00:28:20.898 END TEST bdev_write_zeroes
00:28:20.898 ************************************
00:28:20.898 16:45:52 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:20.898 16:45:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:28:20.898 16:45:52 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:20.898 16:45:52 -- common/autotest_common.sh@10 -- # set +x
00:28:20.898 ************************************
00:28:20.898 START TEST bdev_json_nonenclosed
00:28:20.898 ************************************
00:28:20.898 16:45:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:20.898 [2024-07-13 16:45:52.229812] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:28:20.898 [2024-07-13 16:45:52.230007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147905 ]
00:28:21.157 [2024-07-13 16:45:52.372837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:21.157 [2024-07-13 16:45:52.444471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:21.157 [2024-07-13 16:45:52.444671] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
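bdev_json_nonenclosed is a negative test: the *ERROR* just above is the expected outcome, and the spdk_app_stop warning that follows records the non-zero exit the harness checks for. The nonenclosed.json fixture itself is not echoed in the log; a plausible shape, consistent with the error message (an illustrative guess, not the actual file), is a bare key/value pair with no enclosing top-level object:

"subsystems": []

spdk_subsystem_init_from_json_config rejects any config whose top level is not a JSON object, so the app never finishes starting and exits non-zero, which here is the pass condition.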
00:28:21.157 [2024-07-13 16:45:52.444721] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:21.416 00:28:21.416 real 0m0.462s 00:28:21.417 user 0m0.232s 00:28:21.417 sys 0m0.130s 00:28:21.417 16:45:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.417 16:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.417 ************************************ 00:28:21.417 END TEST bdev_json_nonenclosed 00:28:21.417 ************************************ 00:28:21.417 16:45:52 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:21.417 16:45:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:21.417 16:45:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:21.417 16:45:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.417 ************************************ 00:28:21.417 START TEST bdev_json_nonarray 00:28:21.417 ************************************ 00:28:21.417 16:45:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:21.417 [2024-07-13 16:45:52.768719] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:21.417 [2024-07-13 16:45:52.768978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147944 ] 00:28:21.675 [2024-07-13 16:45:52.924433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.675 [2024-07-13 16:45:53.008150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.675 [2024-07-13 16:45:53.008412] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
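bdev_json_nonarray covers the complementary failure: the config is enclosed in {}, but its "subsystems" member is not an array. Again the fixture is not shown in the log; an illustrative guess at a minimal form would be:

{ "subsystems": {} }

json_config.c then reports "'subsystems' should be an array" (the *ERROR* just above), and as before the test passes only if the app stops with a non-zero code.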
00:28:21.675 [2024-07-13 16:45:53.008457] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:21.932 00:28:21.932 real 0m0.495s 00:28:21.932 user 0m0.239s 00:28:21.932 sys 0m0.156s 00:28:21.932 16:45:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.932 16:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:21.932 ************************************ 00:28:21.932 END TEST bdev_json_nonarray 00:28:21.932 ************************************ 00:28:21.932 16:45:53 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:28:21.932 16:45:53 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:28:21.932 16:45:53 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:28:21.932 16:45:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:21.932 16:45:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:21.932 16:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:21.932 ************************************ 00:28:21.932 START TEST bdev_gpt_uuid 00:28:21.932 ************************************ 00:28:21.932 16:45:53 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:28:21.932 16:45:53 -- bdev/blockdev.sh@612 -- # local bdev 00:28:21.932 16:45:53 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:28:21.932 16:45:53 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=147967 00:28:21.932 16:45:53 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:21.932 16:45:53 -- bdev/blockdev.sh@47 -- # waitforlisten 147967 00:28:21.932 16:45:53 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:21.932 16:45:53 -- common/autotest_common.sh@819 -- # '[' -z 147967 ']' 00:28:21.932 16:45:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.932 16:45:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:21.932 16:45:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.932 16:45:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:21.932 16:45:53 -- common/autotest_common.sh@10 -- # set +x 00:28:21.932 [2024-07-13 16:45:53.346696] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:21.932 [2024-07-13 16:45:53.347046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147967 ] 00:28:22.190 [2024-07-13 16:45:53.502755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.190 [2024-07-13 16:45:53.593452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:22.190 [2024-07-13 16:45:53.593680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.125 16:45:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:23.125 16:45:54 -- common/autotest_common.sh@852 -- # return 0 00:28:23.125 16:45:54 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:23.125 16:45:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.125 16:45:54 -- common/autotest_common.sh@10 -- # set +x 00:28:23.125 Some configs were skipped because the RPC state that can call them passed over. 
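With bdev.json loaded into a fresh spdk_tgt, the gpt_uuid case asks the target for each partition bdev by its unique partition GUID and asserts on the JSON reply; the rpc_cmd and jq calls below show the real sequence. In isolation it boils down to the following sketch (rpc.py path and GUIDs are the ones recorded in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first
"$rpc" bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df   # SPDK_TEST_second
# Each reply must contain exactly one bdev whose .aliases[0] and
# .driver_specific.gpt.unique_partition_guid equal the GUID that was queried.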
00:28:23.125 16:45:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.125 16:45:54 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:28:23.125 16:45:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.125 16:45:54 -- common/autotest_common.sh@10 -- # set +x 00:28:23.125 16:45:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.125 16:45:54 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:28:23.125 16:45:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.125 16:45:54 -- common/autotest_common.sh@10 -- # set +x 00:28:23.125 16:45:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.125 16:45:54 -- bdev/blockdev.sh@619 -- # bdev='[ 00:28:23.125 { 00:28:23.125 "name": "Nvme0n1p1", 00:28:23.125 "aliases": [ 00:28:23.125 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:28:23.125 ], 00:28:23.125 "product_name": "GPT Disk", 00:28:23.125 "block_size": 4096, 00:28:23.125 "num_blocks": 655104, 00:28:23.125 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:23.125 "assigned_rate_limits": { 00:28:23.125 "rw_ios_per_sec": 0, 00:28:23.125 "rw_mbytes_per_sec": 0, 00:28:23.125 "r_mbytes_per_sec": 0, 00:28:23.125 "w_mbytes_per_sec": 0 00:28:23.125 }, 00:28:23.125 "claimed": false, 00:28:23.125 "zoned": false, 00:28:23.125 "supported_io_types": { 00:28:23.125 "read": true, 00:28:23.125 "write": true, 00:28:23.125 "unmap": true, 00:28:23.125 "write_zeroes": true, 00:28:23.125 "flush": true, 00:28:23.125 "reset": true, 00:28:23.125 "compare": true, 00:28:23.125 "compare_and_write": false, 00:28:23.125 "abort": true, 00:28:23.125 "nvme_admin": false, 00:28:23.125 "nvme_io": false 00:28:23.125 }, 00:28:23.125 "driver_specific": { 00:28:23.125 "gpt": { 00:28:23.125 "base_bdev": "Nvme0n1", 00:28:23.125 "offset_blocks": 256, 00:28:23.125 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:28:23.125 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:23.125 "partition_name": "SPDK_TEST_first" 00:28:23.125 } 00:28:23.125 } 00:28:23.125 } 00:28:23.125 ]' 00:28:23.125 16:45:54 -- bdev/blockdev.sh@620 -- # jq -r length 00:28:23.125 16:45:54 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:28:23.125 16:45:54 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:28:23.125 16:45:54 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:23.125 16:45:54 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:23.125 16:45:54 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:23.125 16:45:54 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:28:23.126 16:45:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.126 16:45:54 -- common/autotest_common.sh@10 -- # set +x 00:28:23.126 16:45:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.126 16:45:54 -- bdev/blockdev.sh@624 -- # bdev='[ 00:28:23.126 { 00:28:23.126 "name": "Nvme0n1p2", 00:28:23.126 "aliases": [ 00:28:23.126 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:28:23.126 ], 00:28:23.126 "product_name": "GPT Disk", 00:28:23.126 "block_size": 4096, 00:28:23.126 "num_blocks": 655103, 00:28:23.126 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:23.126 "assigned_rate_limits": { 00:28:23.126 "rw_ios_per_sec": 0, 00:28:23.126 
"rw_mbytes_per_sec": 0, 00:28:23.126 "r_mbytes_per_sec": 0, 00:28:23.126 "w_mbytes_per_sec": 0 00:28:23.126 }, 00:28:23.126 "claimed": false, 00:28:23.126 "zoned": false, 00:28:23.126 "supported_io_types": { 00:28:23.126 "read": true, 00:28:23.126 "write": true, 00:28:23.126 "unmap": true, 00:28:23.126 "write_zeroes": true, 00:28:23.126 "flush": true, 00:28:23.126 "reset": true, 00:28:23.126 "compare": true, 00:28:23.126 "compare_and_write": false, 00:28:23.126 "abort": true, 00:28:23.126 "nvme_admin": false, 00:28:23.126 "nvme_io": false 00:28:23.126 }, 00:28:23.126 "driver_specific": { 00:28:23.126 "gpt": { 00:28:23.126 "base_bdev": "Nvme0n1", 00:28:23.126 "offset_blocks": 655360, 00:28:23.126 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:28:23.126 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:23.126 "partition_name": "SPDK_TEST_second" 00:28:23.126 } 00:28:23.126 } 00:28:23.126 } 00:28:23.126 ]' 00:28:23.126 16:45:54 -- bdev/blockdev.sh@625 -- # jq -r length 00:28:23.385 16:45:54 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:28:23.385 16:45:54 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:28:23.385 16:45:54 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:23.385 16:45:54 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:23.385 16:45:54 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:23.385 16:45:54 -- bdev/blockdev.sh@629 -- # killprocess 147967 00:28:23.385 16:45:54 -- common/autotest_common.sh@926 -- # '[' -z 147967 ']' 00:28:23.385 16:45:54 -- common/autotest_common.sh@930 -- # kill -0 147967 00:28:23.385 16:45:54 -- common/autotest_common.sh@931 -- # uname 00:28:23.385 16:45:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:23.385 16:45:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 147967 00:28:23.385 16:45:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:23.385 killing process with pid 147967 00:28:23.385 16:45:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:23.385 16:45:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 147967' 00:28:23.385 16:45:54 -- common/autotest_common.sh@945 -- # kill 147967 00:28:23.385 16:45:54 -- common/autotest_common.sh@950 -- # wait 147967 00:28:23.954 00:28:23.954 real 0m2.146s 00:28:23.954 user 0m2.271s 00:28:23.954 sys 0m0.567s 00:28:23.954 16:45:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:23.954 16:45:55 -- common/autotest_common.sh@10 -- # set +x 00:28:23.954 ************************************ 00:28:23.954 END TEST bdev_gpt_uuid 00:28:23.954 ************************************ 00:28:24.213 16:45:55 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:28:24.213 16:45:55 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:24.213 16:45:55 -- bdev/blockdev.sh@809 -- # cleanup 00:28:24.213 16:45:55 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:24.213 16:45:55 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:24.213 16:45:55 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:28:24.213 16:45:55 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:28:24.213 16:45:55 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:28:24.213 16:45:55 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:24.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:24.472 Waiting for block devices as requested 00:28:24.731 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:24.731 16:45:56 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:28:24.731 16:45:56 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:28:24.731 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:28:24.731 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:28:24.731 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:28:24.731 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:28:24.731 16:45:56 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:28:24.731 00:28:24.731 real 0m36.523s 00:28:24.731 user 0m53.075s 00:28:24.731 sys 0m7.510s 00:28:24.731 16:45:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.731 ************************************ 00:28:24.731 END TEST blockdev_nvme_gpt 00:28:24.731 ************************************ 00:28:24.731 16:45:56 -- common/autotest_common.sh@10 -- # set +x 00:28:24.731 16:45:56 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:24.731 16:45:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:24.731 16:45:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.731 16:45:56 -- common/autotest_common.sh@10 -- # set +x 00:28:24.991 ************************************ 00:28:24.991 START TEST nvme 00:28:24.991 ************************************ 00:28:24.991 16:45:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:24.991 * Looking for test storage... 00:28:24.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:24.991 16:45:56 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:25.558 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:25.558 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:26.936 16:45:58 -- nvme/nvme.sh@79 -- # uname 00:28:26.936 16:45:58 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:28:26.936 16:45:58 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:28:26.936 16:45:58 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:28:26.936 16:45:58 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:28:26.936 16:45:58 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:28:26.936 16:45:58 -- common/autotest_common.sh@1045 -- # echo 0 00:28:26.936 16:45:58 -- common/autotest_common.sh@1047 -- # stubpid=148372 00:28:26.936 Waiting for stub to ready for secondary processes... 00:28:26.936 16:45:58 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:28:26.936 16:45:58 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:26.936 16:45:58 -- common/autotest_common.sh@1051 -- # [[ -e /proc/148372 ]] 00:28:26.936 16:45:58 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:28:26.936 16:45:58 -- common/autotest_common.sh@1052 -- # sleep 1s 00:28:26.936 [2024-07-13 16:45:58.163226] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:28:26.936 [2024-07-13 16:45:58.164109] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.872 16:45:59 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:27.872 16:45:59 -- common/autotest_common.sh@1051 -- # [[ -e /proc/148372 ]] 00:28:27.872 16:45:59 -- common/autotest_common.sh@1052 -- # sleep 1s 00:28:28.809 [2024-07-13 16:46:00.011288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:28.809 [2024-07-13 16:46:00.058996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.809 [2024-07-13 16:46:00.059192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.809 [2024-07-13 16:46:00.059183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.809 [2024-07-13 16:46:00.069898] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:28:28.809 [2024-07-13 16:46:00.081592] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:28:28.809 [2024-07-13 16:46:00.082711] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:28:28.809 16:46:00 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:28.809 done. 00:28:28.809 16:46:00 -- common/autotest_common.sh@1054 -- # echo done. 00:28:28.809 16:46:00 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:28.809 16:46:00 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:28:28.810 16:46:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:28.810 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:28:28.810 ************************************ 00:28:28.810 START TEST nvme_reset 00:28:28.810 ************************************ 00:28:28.810 16:46:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:29.068 Initializing NVMe Controllers 00:28:29.068 Skipping QEMU NVMe SSD at 0000:00:06.0 00:28:29.068 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:28:29.068 00:28:29.068 real 0m0.296s 00:28:29.068 user 0m0.124s 00:28:29.068 sys 0m0.103s 00:28:29.068 16:46:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.068 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:28:29.068 ************************************ 00:28:29.068 END TEST nvme_reset 00:28:29.068 ************************************ 00:28:29.068 16:46:00 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:28:29.068 16:46:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:29.068 16:46:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:29.068 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:28:29.068 ************************************ 00:28:29.068 START TEST nvme_identify 00:28:29.068 ************************************ 00:28:29.068 16:46:00 -- common/autotest_common.sh@1104 -- # nvme_identify 00:28:29.068 16:46:00 -- nvme/nvme.sh@12 -- # bdfs=() 00:28:29.068 16:46:00 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:28:29.068 16:46:00 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:28:29.068 16:46:00 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:28:29.068 16:46:00 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:28:29.068 16:46:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:29.068 16:46:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:29.068 16:46:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:29.068 16:46:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:29.325 16:46:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:29.325 16:46:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:28:29.325 16:46:00 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:28:29.325 [2024-07-13 16:46:00.792415] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 148412 terminated unexpected 00:28:29.325 ===================================================== 00:28:29.325 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:29.325 ===================================================== 00:28:29.325 Controller Capabilities/Features 00:28:29.325 ================================ 00:28:29.325 Vendor ID: 1b36 00:28:29.325 Subsystem Vendor ID: 1af4 00:28:29.325 Serial Number: 12340 00:28:29.325 Model Number: QEMU NVMe Ctrl 00:28:29.325 Firmware Version: 8.0.0 00:28:29.325 Recommended Arb Burst: 6 00:28:29.325 IEEE OUI Identifier: 00 54 52 00:28:29.325 Multi-path I/O 00:28:29.325 May have multiple subsystem ports: No 00:28:29.325 May have multiple controllers: No 00:28:29.325 Associated with SR-IOV VF: No 00:28:29.325 Max Data Transfer Size: 524288 00:28:29.325 Max Number of Namespaces: 256 00:28:29.325 Max Number of I/O Queues: 64 00:28:29.325 NVMe Specification Version (VS): 1.4 00:28:29.325 NVMe Specification Version (Identify): 1.4 00:28:29.325 Maximum Queue Entries: 2048 00:28:29.325 Contiguous Queues Required: Yes 00:28:29.325 Arbitration Mechanisms Supported 00:28:29.325 Weighted Round Robin: Not Supported 00:28:29.325 Vendor Specific: Not Supported 00:28:29.325 Reset Timeout: 7500 ms 00:28:29.325 Doorbell Stride: 4 bytes 00:28:29.325 NVM Subsystem Reset: Not Supported 00:28:29.325 Command Sets Supported 00:28:29.326 NVM Command Set: Supported 00:28:29.326 Boot Partition: Not Supported 00:28:29.326 Memory Page Size Minimum: 4096 bytes 00:28:29.326 Memory Page Size Maximum: 65536 bytes 00:28:29.326 Persistent Memory Region: Not Supported 00:28:29.326 Optional Asynchronous Events Supported 00:28:29.326 Namespace Attribute Notices: Supported 00:28:29.326 Firmware Activation Notices: Not Supported 00:28:29.326 ANA Change Notices: Not Supported 00:28:29.326 PLE Aggregate Log Change Notices: Not Supported 00:28:29.326 LBA Status Info Alert Notices: Not Supported 00:28:29.326 EGE Aggregate Log Change Notices: Not Supported 00:28:29.326 Normal NVM Subsystem Shutdown event: Not Supported 00:28:29.326 Zone Descriptor Change Notices: Not Supported 00:28:29.326 Discovery Log Change Notices: Not Supported 00:28:29.326 Controller Attributes 00:28:29.326 128-bit Host Identifier: Not Supported 00:28:29.326 Non-Operational Permissive Mode: Not Supported 00:28:29.326 NVM Sets: Not Supported 00:28:29.326 Read Recovery Levels: Not Supported 00:28:29.326 Endurance Groups: Not Supported 00:28:29.326 Predictable Latency Mode: Not Supported 00:28:29.326 Traffic Based Keep ALive: Not Supported 00:28:29.326 Namespace Granularity: Not Supported 00:28:29.326 SQ Associations: Not Supported 00:28:29.326 UUID List: Not Supported 00:28:29.326 Multi-Domain Subsystem: Not Supported 00:28:29.326 
Fixed Capacity Management: Not Supported 00:28:29.326 Variable Capacity Management: Not Supported 00:28:29.326 Delete Endurance Group: Not Supported 00:28:29.326 Delete NVM Set: Not Supported 00:28:29.326 Extended LBA Formats Supported: Supported 00:28:29.326 Flexible Data Placement Supported: Not Supported 00:28:29.326 00:28:29.326 Controller Memory Buffer Support 00:28:29.326 ================================ 00:28:29.326 Supported: No 00:28:29.326 00:28:29.326 Persistent Memory Region Support 00:28:29.326 ================================ 00:28:29.326 Supported: No 00:28:29.326 00:28:29.326 Admin Command Set Attributes 00:28:29.326 ============================ 00:28:29.326 Security Send/Receive: Not Supported 00:28:29.326 Format NVM: Supported 00:28:29.326 Firmware Activate/Download: Not Supported 00:28:29.326 Namespace Management: Supported 00:28:29.326 Device Self-Test: Not Supported 00:28:29.326 Directives: Supported 00:28:29.326 NVMe-MI: Not Supported 00:28:29.326 Virtualization Management: Not Supported 00:28:29.326 Doorbell Buffer Config: Supported 00:28:29.326 Get LBA Status Capability: Not Supported 00:28:29.326 Command & Feature Lockdown Capability: Not Supported 00:28:29.326 Abort Command Limit: 4 00:28:29.326 Async Event Request Limit: 4 00:28:29.326 Number of Firmware Slots: N/A 00:28:29.326 Firmware Slot 1 Read-Only: N/A 00:28:29.326 Firmware Activation Without Reset: N/A 00:28:29.326 Multiple Update Detection Support: N/A 00:28:29.326 Firmware Update Granularity: No Information Provided 00:28:29.326 Per-Namespace SMART Log: Yes 00:28:29.326 Asymmetric Namespace Access Log Page: Not Supported 00:28:29.326 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:29.326 Command Effects Log Page: Supported 00:28:29.326 Get Log Page Extended Data: Supported 00:28:29.326 Telemetry Log Pages: Not Supported 00:28:29.326 Persistent Event Log Pages: Not Supported 00:28:29.326 Supported Log Pages Log Page: May Support 00:28:29.326 Commands Supported & Effects Log Page: Not Supported 00:28:29.326 Feature Identifiers & Effects Log Page:May Support 00:28:29.326 NVMe-MI Commands & Effects Log Page: May Support 00:28:29.326 Data Area 4 for Telemetry Log: Not Supported 00:28:29.326 Error Log Page Entries Supported: 1 00:28:29.326 Keep Alive: Not Supported 00:28:29.326 00:28:29.326 NVM Command Set Attributes 00:28:29.326 ========================== 00:28:29.326 Submission Queue Entry Size 00:28:29.326 Max: 64 00:28:29.326 Min: 64 00:28:29.326 Completion Queue Entry Size 00:28:29.326 Max: 16 00:28:29.326 Min: 16 00:28:29.326 Number of Namespaces: 256 00:28:29.326 Compare Command: Supported 00:28:29.326 Write Uncorrectable Command: Not Supported 00:28:29.326 Dataset Management Command: Supported 00:28:29.326 Write Zeroes Command: Supported 00:28:29.326 Set Features Save Field: Supported 00:28:29.326 Reservations: Not Supported 00:28:29.326 Timestamp: Supported 00:28:29.326 Copy: Supported 00:28:29.326 Volatile Write Cache: Present 00:28:29.326 Atomic Write Unit (Normal): 1 00:28:29.326 Atomic Write Unit (PFail): 1 00:28:29.326 Atomic Compare & Write Unit: 1 00:28:29.326 Fused Compare & Write: Not Supported 00:28:29.326 Scatter-Gather List 00:28:29.326 SGL Command Set: Supported 00:28:29.326 SGL Keyed: Not Supported 00:28:29.326 SGL Bit Bucket Descriptor: Not Supported 00:28:29.326 SGL Metadata Pointer: Not Supported 00:28:29.326 Oversized SGL: Not Supported 00:28:29.326 SGL Metadata Address: Not Supported 00:28:29.326 SGL Offset: Not Supported 00:28:29.326 Transport SGL Data Block: Not Supported 
00:28:29.326 Replay Protected Memory Block: Not Supported 00:28:29.326 00:28:29.326 Firmware Slot Information 00:28:29.326 ========================= 00:28:29.326 Active slot: 1 00:28:29.326 Slot 1 Firmware Revision: 1.0 00:28:29.326 00:28:29.326 00:28:29.326 Commands Supported and Effects 00:28:29.326 ============================== 00:28:29.326 Admin Commands 00:28:29.326 -------------- 00:28:29.326 Delete I/O Submission Queue (00h): Supported 00:28:29.326 Create I/O Submission Queue (01h): Supported 00:28:29.326 Get Log Page (02h): Supported 00:28:29.326 Delete I/O Completion Queue (04h): Supported 00:28:29.326 Create I/O Completion Queue (05h): Supported 00:28:29.326 Identify (06h): Supported 00:28:29.326 Abort (08h): Supported 00:28:29.326 Set Features (09h): Supported 00:28:29.326 Get Features (0Ah): Supported 00:28:29.326 Asynchronous Event Request (0Ch): Supported 00:28:29.326 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:29.326 Directive Send (19h): Supported 00:28:29.326 Directive Receive (1Ah): Supported 00:28:29.326 Virtualization Management (1Ch): Supported 00:28:29.326 Doorbell Buffer Config (7Ch): Supported 00:28:29.326 Format NVM (80h): Supported LBA-Change 00:28:29.326 I/O Commands 00:28:29.326 ------------ 00:28:29.326 Flush (00h): Supported LBA-Change 00:28:29.326 Write (01h): Supported LBA-Change 00:28:29.326 Read (02h): Supported 00:28:29.326 Compare (05h): Supported 00:28:29.326 Write Zeroes (08h): Supported LBA-Change 00:28:29.326 Dataset Management (09h): Supported LBA-Change 00:28:29.326 Unknown (0Ch): Supported 00:28:29.326 Unknown (12h): Supported 00:28:29.326 Copy (19h): Supported LBA-Change 00:28:29.326 Unknown (1Dh): Supported LBA-Change 00:28:29.326 00:28:29.326 Error Log 00:28:29.326 ========= 00:28:29.326 00:28:29.326 Arbitration 00:28:29.326 =========== 00:28:29.326 Arbitration Burst: no limit 00:28:29.326 00:28:29.326 Power Management 00:28:29.326 ================ 00:28:29.326 Number of Power States: 1 00:28:29.326 Current Power State: Power State #0 00:28:29.326 Power State #0: 00:28:29.326 Max Power: 25.00 W 00:28:29.326 Non-Operational State: Operational 00:28:29.326 Entry Latency: 16 microseconds 00:28:29.326 Exit Latency: 4 microseconds 00:28:29.326 Relative Read Throughput: 0 00:28:29.326 Relative Read Latency: 0 00:28:29.326 Relative Write Throughput: 0 00:28:29.326 Relative Write Latency: 0 00:28:29.585 Idle Power: Not Reported 00:28:29.585 Active Power: Not Reported 00:28:29.585 Non-Operational Permissive Mode: Not Supported 00:28:29.585 00:28:29.585 Health Information 00:28:29.585 ================== 00:28:29.585 Critical Warnings: 00:28:29.585 Available Spare Space: OK 00:28:29.585 Temperature: OK 00:28:29.585 Device Reliability: OK 00:28:29.585 Read Only: No 00:28:29.585 Volatile Memory Backup: OK 00:28:29.585 Current Temperature: 323 Kelvin (50 Celsius) 00:28:29.585 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:29.585 Available Spare: 0% 00:28:29.585 Available Spare Threshold: 0% 00:28:29.585 Life Percentage Used: 0% 00:28:29.585 Data Units Read: 6141 00:28:29.585 Data Units Written: 2977 00:28:29.585 Host Read Commands: 313757 00:28:29.585 Host Write Commands: 172118 00:28:29.585 Controller Busy Time: 0 minutes 00:28:29.585 Power Cycles: 0 00:28:29.585 Power On Hours: 0 hours 00:28:29.585 Unsafe Shutdowns: 0 00:28:29.585 Unrecoverable Media Errors: 0 00:28:29.585 Lifetime Error Log Entries: 0 00:28:29.585 Warning Temperature Time: 0 minutes 00:28:29.585 Critical Temperature Time: 0 minutes 00:28:29.585 00:28:29.585 
Number of Queues 00:28:29.585 ================ 00:28:29.585 Number of I/O Submission Queues: 64 00:28:29.585 Number of I/O Completion Queues: 64 00:28:29.585 00:28:29.585 ZNS Specific Controller Data 00:28:29.585 ============================ 00:28:29.585 Zone Append Size Limit: 0 00:28:29.585 00:28:29.585 00:28:29.585 Active Namespaces 00:28:29.585 ================= 00:28:29.585 Namespace ID:1 00:28:29.585 Error Recovery Timeout: Unlimited 00:28:29.585 Command Set Identifier: NVM (00h) 00:28:29.585 Deallocate: Supported 00:28:29.585 Deallocated/Unwritten Error: Supported 00:28:29.585 Deallocated Read Value: All 0x00 00:28:29.585 Deallocate in Write Zeroes: Not Supported 00:28:29.585 Deallocated Guard Field: 0xFFFF 00:28:29.585 Flush: Supported 00:28:29.585 Reservation: Not Supported 00:28:29.585 Namespace Sharing Capabilities: Private 00:28:29.585 Size (in LBAs): 1310720 (5GiB) 00:28:29.585 Capacity (in LBAs): 1310720 (5GiB) 00:28:29.585 Utilization (in LBAs): 1310720 (5GiB) 00:28:29.585 Thin Provisioning: Not Supported 00:28:29.585 Per-NS Atomic Units: No 00:28:29.585 Maximum Single Source Range Length: 128 00:28:29.585 Maximum Copy Length: 128 00:28:29.585 Maximum Source Range Count: 128 00:28:29.585 NGUID/EUI64 Never Reused: No 00:28:29.585 Namespace Write Protected: No 00:28:29.585 Number of LBA Formats: 8 00:28:29.585 Current LBA Format: LBA Format #04 00:28:29.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:29.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:29.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:28:29.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:29.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:29.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:29.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:29.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:29.585 00:28:29.585 16:46:00 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:28:29.585 16:46:00 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:28:29.845 ===================================================== 00:28:29.845 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:29.845 ===================================================== 00:28:29.845 Controller Capabilities/Features 00:28:29.845 ================================ 00:28:29.845 Vendor ID: 1b36 00:28:29.845 Subsystem Vendor ID: 1af4 00:28:29.845 Serial Number: 12340 00:28:29.845 Model Number: QEMU NVMe Ctrl 00:28:29.845 Firmware Version: 8.0.0 00:28:29.845 Recommended Arb Burst: 6 00:28:29.845 IEEE OUI Identifier: 00 54 52 00:28:29.845 Multi-path I/O 00:28:29.845 May have multiple subsystem ports: No 00:28:29.845 May have multiple controllers: No 00:28:29.845 Associated with SR-IOV VF: No 00:28:29.845 Max Data Transfer Size: 524288 00:28:29.845 Max Number of Namespaces: 256 00:28:29.845 Max Number of I/O Queues: 64 00:28:29.845 NVMe Specification Version (VS): 1.4 00:28:29.845 NVMe Specification Version (Identify): 1.4 00:28:29.845 Maximum Queue Entries: 2048 00:28:29.845 Contiguous Queues Required: Yes 00:28:29.845 Arbitration Mechanisms Supported 00:28:29.845 Weighted Round Robin: Not Supported 00:28:29.845 Vendor Specific: Not Supported 00:28:29.845 Reset Timeout: 7500 ms 00:28:29.845 Doorbell Stride: 4 bytes 00:28:29.845 NVM Subsystem Reset: Not Supported 00:28:29.845 Command Sets Supported 00:28:29.845 NVM Command Set: Supported 00:28:29.845 Boot Partition: Not Supported 00:28:29.845 Memory Page Size 
Minimum: 4096 bytes 00:28:29.845 Memory Page Size Maximum: 65536 bytes 00:28:29.845 Persistent Memory Region: Not Supported 00:28:29.845 Optional Asynchronous Events Supported 00:28:29.845 Namespace Attribute Notices: Supported 00:28:29.845 Firmware Activation Notices: Not Supported 00:28:29.845 ANA Change Notices: Not Supported 00:28:29.845 PLE Aggregate Log Change Notices: Not Supported 00:28:29.845 LBA Status Info Alert Notices: Not Supported 00:28:29.845 EGE Aggregate Log Change Notices: Not Supported 00:28:29.845 Normal NVM Subsystem Shutdown event: Not Supported 00:28:29.845 Zone Descriptor Change Notices: Not Supported 00:28:29.845 Discovery Log Change Notices: Not Supported 00:28:29.845 Controller Attributes 00:28:29.845 128-bit Host Identifier: Not Supported 00:28:29.845 Non-Operational Permissive Mode: Not Supported 00:28:29.845 NVM Sets: Not Supported 00:28:29.845 Read Recovery Levels: Not Supported 00:28:29.845 Endurance Groups: Not Supported 00:28:29.845 Predictable Latency Mode: Not Supported 00:28:29.845 Traffic Based Keep ALive: Not Supported 00:28:29.845 Namespace Granularity: Not Supported 00:28:29.845 SQ Associations: Not Supported 00:28:29.845 UUID List: Not Supported 00:28:29.845 Multi-Domain Subsystem: Not Supported 00:28:29.845 Fixed Capacity Management: Not Supported 00:28:29.845 Variable Capacity Management: Not Supported 00:28:29.845 Delete Endurance Group: Not Supported 00:28:29.845 Delete NVM Set: Not Supported 00:28:29.845 Extended LBA Formats Supported: Supported 00:28:29.845 Flexible Data Placement Supported: Not Supported 00:28:29.845 00:28:29.845 Controller Memory Buffer Support 00:28:29.845 ================================ 00:28:29.845 Supported: No 00:28:29.845 00:28:29.845 Persistent Memory Region Support 00:28:29.845 ================================ 00:28:29.845 Supported: No 00:28:29.845 00:28:29.845 Admin Command Set Attributes 00:28:29.845 ============================ 00:28:29.845 Security Send/Receive: Not Supported 00:28:29.845 Format NVM: Supported 00:28:29.845 Firmware Activate/Download: Not Supported 00:28:29.845 Namespace Management: Supported 00:28:29.845 Device Self-Test: Not Supported 00:28:29.845 Directives: Supported 00:28:29.845 NVMe-MI: Not Supported 00:28:29.845 Virtualization Management: Not Supported 00:28:29.845 Doorbell Buffer Config: Supported 00:28:29.845 Get LBA Status Capability: Not Supported 00:28:29.845 Command & Feature Lockdown Capability: Not Supported 00:28:29.845 Abort Command Limit: 4 00:28:29.845 Async Event Request Limit: 4 00:28:29.845 Number of Firmware Slots: N/A 00:28:29.845 Firmware Slot 1 Read-Only: N/A 00:28:29.845 Firmware Activation Without Reset: N/A 00:28:29.845 Multiple Update Detection Support: N/A 00:28:29.845 Firmware Update Granularity: No Information Provided 00:28:29.845 Per-Namespace SMART Log: Yes 00:28:29.845 Asymmetric Namespace Access Log Page: Not Supported 00:28:29.845 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:29.845 Command Effects Log Page: Supported 00:28:29.845 Get Log Page Extended Data: Supported 00:28:29.845 Telemetry Log Pages: Not Supported 00:28:29.845 Persistent Event Log Pages: Not Supported 00:28:29.845 Supported Log Pages Log Page: May Support 00:28:29.845 Commands Supported & Effects Log Page: Not Supported 00:28:29.845 Feature Identifiers & Effects Log Page:May Support 00:28:29.845 NVMe-MI Commands & Effects Log Page: May Support 00:28:29.845 Data Area 4 for Telemetry Log: Not Supported 00:28:29.845 Error Log Page Entries Supported: 1 00:28:29.845 Keep Alive: Not 
Supported 00:28:29.845 00:28:29.845 NVM Command Set Attributes 00:28:29.845 ========================== 00:28:29.845 Submission Queue Entry Size 00:28:29.845 Max: 64 00:28:29.845 Min: 64 00:28:29.845 Completion Queue Entry Size 00:28:29.845 Max: 16 00:28:29.845 Min: 16 00:28:29.845 Number of Namespaces: 256 00:28:29.845 Compare Command: Supported 00:28:29.845 Write Uncorrectable Command: Not Supported 00:28:29.845 Dataset Management Command: Supported 00:28:29.845 Write Zeroes Command: Supported 00:28:29.845 Set Features Save Field: Supported 00:28:29.845 Reservations: Not Supported 00:28:29.845 Timestamp: Supported 00:28:29.845 Copy: Supported 00:28:29.845 Volatile Write Cache: Present 00:28:29.845 Atomic Write Unit (Normal): 1 00:28:29.845 Atomic Write Unit (PFail): 1 00:28:29.845 Atomic Compare & Write Unit: 1 00:28:29.845 Fused Compare & Write: Not Supported 00:28:29.845 Scatter-Gather List 00:28:29.845 SGL Command Set: Supported 00:28:29.845 SGL Keyed: Not Supported 00:28:29.845 SGL Bit Bucket Descriptor: Not Supported 00:28:29.845 SGL Metadata Pointer: Not Supported 00:28:29.845 Oversized SGL: Not Supported 00:28:29.845 SGL Metadata Address: Not Supported 00:28:29.845 SGL Offset: Not Supported 00:28:29.845 Transport SGL Data Block: Not Supported 00:28:29.845 Replay Protected Memory Block: Not Supported 00:28:29.845 00:28:29.845 Firmware Slot Information 00:28:29.845 ========================= 00:28:29.845 Active slot: 1 00:28:29.845 Slot 1 Firmware Revision: 1.0 00:28:29.845 00:28:29.845 00:28:29.845 Commands Supported and Effects 00:28:29.845 ============================== 00:28:29.845 Admin Commands 00:28:29.845 -------------- 00:28:29.846 Delete I/O Submission Queue (00h): Supported 00:28:29.846 Create I/O Submission Queue (01h): Supported 00:28:29.846 Get Log Page (02h): Supported 00:28:29.846 Delete I/O Completion Queue (04h): Supported 00:28:29.846 Create I/O Completion Queue (05h): Supported 00:28:29.846 Identify (06h): Supported 00:28:29.846 Abort (08h): Supported 00:28:29.846 Set Features (09h): Supported 00:28:29.846 Get Features (0Ah): Supported 00:28:29.846 Asynchronous Event Request (0Ch): Supported 00:28:29.846 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:29.846 Directive Send (19h): Supported 00:28:29.846 Directive Receive (1Ah): Supported 00:28:29.846 Virtualization Management (1Ch): Supported 00:28:29.846 Doorbell Buffer Config (7Ch): Supported 00:28:29.846 Format NVM (80h): Supported LBA-Change 00:28:29.846 I/O Commands 00:28:29.846 ------------ 00:28:29.846 Flush (00h): Supported LBA-Change 00:28:29.846 Write (01h): Supported LBA-Change 00:28:29.846 Read (02h): Supported 00:28:29.846 Compare (05h): Supported 00:28:29.846 Write Zeroes (08h): Supported LBA-Change 00:28:29.846 Dataset Management (09h): Supported LBA-Change 00:28:29.846 Unknown (0Ch): Supported 00:28:29.846 Unknown (12h): Supported 00:28:29.846 Copy (19h): Supported LBA-Change 00:28:29.846 Unknown (1Dh): Supported LBA-Change 00:28:29.846 00:28:29.846 Error Log 00:28:29.846 ========= 00:28:29.846 00:28:29.846 Arbitration 00:28:29.846 =========== 00:28:29.846 Arbitration Burst: no limit 00:28:29.846 00:28:29.846 Power Management 00:28:29.846 ================ 00:28:29.846 Number of Power States: 1 00:28:29.846 Current Power State: Power State #0 00:28:29.846 Power State #0: 00:28:29.846 Max Power: 25.00 W 00:28:29.846 Non-Operational State: Operational 00:28:29.846 Entry Latency: 16 microseconds 00:28:29.846 Exit Latency: 4 microseconds 00:28:29.846 Relative Read Throughput: 0 
00:28:29.846 Relative Read Latency: 0 00:28:29.846 Relative Write Throughput: 0 00:28:29.846 Relative Write Latency: 0 00:28:29.846 Idle Power: Not Reported 00:28:29.846 Active Power: Not Reported 00:28:29.846 Non-Operational Permissive Mode: Not Supported 00:28:29.846 00:28:29.846 Health Information 00:28:29.846 ================== 00:28:29.846 Critical Warnings: 00:28:29.846 Available Spare Space: OK 00:28:29.846 Temperature: OK 00:28:29.846 Device Reliability: OK 00:28:29.846 Read Only: No 00:28:29.846 Volatile Memory Backup: OK 00:28:29.846 Current Temperature: 323 Kelvin (50 Celsius) 00:28:29.846 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:29.846 Available Spare: 0% 00:28:29.846 Available Spare Threshold: 0% 00:28:29.846 Life Percentage Used: 0% 00:28:29.846 Data Units Read: 6141 00:28:29.846 Data Units Written: 2977 00:28:29.846 Host Read Commands: 313757 00:28:29.846 Host Write Commands: 172118 00:28:29.846 Controller Busy Time: 0 minutes 00:28:29.846 Power Cycles: 0 00:28:29.846 Power On Hours: 0 hours 00:28:29.846 Unsafe Shutdowns: 0 00:28:29.846 Unrecoverable Media Errors: 0 00:28:29.846 Lifetime Error Log Entries: 0 00:28:29.846 Warning Temperature Time: 0 minutes 00:28:29.846 Critical Temperature Time: 0 minutes 00:28:29.846 00:28:29.846 Number of Queues 00:28:29.846 ================ 00:28:29.846 Number of I/O Submission Queues: 64 00:28:29.846 Number of I/O Completion Queues: 64 00:28:29.846 00:28:29.846 ZNS Specific Controller Data 00:28:29.846 ============================ 00:28:29.846 Zone Append Size Limit: 0 00:28:29.846 00:28:29.846 00:28:29.846 Active Namespaces 00:28:29.846 ================= 00:28:29.846 Namespace ID:1 00:28:29.846 Error Recovery Timeout: Unlimited 00:28:29.846 Command Set Identifier: NVM (00h) 00:28:29.846 Deallocate: Supported 00:28:29.846 Deallocated/Unwritten Error: Supported 00:28:29.846 Deallocated Read Value: All 0x00 00:28:29.846 Deallocate in Write Zeroes: Not Supported 00:28:29.846 Deallocated Guard Field: 0xFFFF 00:28:29.846 Flush: Supported 00:28:29.846 Reservation: Not Supported 00:28:29.846 Namespace Sharing Capabilities: Private 00:28:29.846 Size (in LBAs): 1310720 (5GiB) 00:28:29.846 Capacity (in LBAs): 1310720 (5GiB) 00:28:29.846 Utilization (in LBAs): 1310720 (5GiB) 00:28:29.846 Thin Provisioning: Not Supported 00:28:29.846 Per-NS Atomic Units: No 00:28:29.846 Maximum Single Source Range Length: 128 00:28:29.846 Maximum Copy Length: 128 00:28:29.846 Maximum Source Range Count: 128 00:28:29.846 NGUID/EUI64 Never Reused: No 00:28:29.846 Namespace Write Protected: No 00:28:29.846 Number of LBA Formats: 8 00:28:29.846 Current LBA Format: LBA Format #04 00:28:29.846 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:29.846 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:29.846 LBA Format #02: Data Size: 512 Metadata Size: 16 00:28:29.846 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:29.846 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:29.846 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:29.846 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:29.846 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:29.846 00:28:29.846 00:28:29.846 real 0m0.619s 00:28:29.846 user 0m0.251s 00:28:29.846 sys 0m0.293s 00:28:29.846 16:46:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.846 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:28:29.846 ************************************ 00:28:29.846 END TEST nvme_identify 00:28:29.846 ************************************ 00:28:29.846 
16:46:01 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:28:29.846 16:46:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:29.846 16:46:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:29.846 16:46:01 -- common/autotest_common.sh@10 -- # set +x 00:28:29.846 ************************************ 00:28:29.846 START TEST nvme_perf 00:28:29.846 ************************************ 00:28:29.846 16:46:01 -- common/autotest_common.sh@1104 -- # nvme_perf 00:28:29.846 16:46:01 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:28:31.224 Initializing NVMe Controllers 00:28:31.224 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:31.224 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:31.224 Initialization complete. Launching workers. 00:28:31.224 ======================================================== 00:28:31.224 Latency(us) 00:28:31.224 Device Information : IOPS MiB/s Average min max 00:28:31.224 PCIE (0000:00:06.0) NSID 1 from core 0: 52736.00 618.00 2427.21 1323.49 5227.09 00:28:31.224 ======================================================== 00:28:31.224 Total : 52736.00 618.00 2427.21 1323.49 5227.09 00:28:31.224 00:28:31.224 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:31.224 ================================================================================= 00:28:31.224 1.00000% : 1490.164us 00:28:31.224 10.00000% : 1700.815us 00:28:31.224 25.00000% : 1966.080us 00:28:31.224 50.00000% : 2434.194us 00:28:31.224 75.00000% : 2886.705us 00:28:31.224 90.00000% : 3136.366us 00:28:31.224 95.00000% : 3229.989us 00:28:31.224 98.00000% : 3370.423us 00:28:31.224 99.00000% : 3557.669us 00:28:31.224 99.50000% : 3994.575us 00:28:31.224 99.90000% : 4525.105us 00:28:31.224 99.99000% : 5118.050us 00:28:31.224 99.99900% : 5242.880us 00:28:31.224 99.99990% : 5242.880us 00:28:31.224 99.99999% : 5242.880us 00:28:31.224 00:28:31.224 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:31.224 ============================================================================== 00:28:31.224 Range in us Cumulative IO count 00:28:31.224 1318.522 - 1326.324: 0.0019% ( 1) 00:28:31.224 1334.126 - 1341.928: 0.0114% ( 5) 00:28:31.224 1341.928 - 1349.730: 0.0209% ( 5) 00:28:31.224 1349.730 - 1357.531: 0.0303% ( 5) 00:28:31.224 1357.531 - 1365.333: 0.0341% ( 2) 00:28:31.224 1365.333 - 1373.135: 0.0512% ( 9) 00:28:31.224 1373.135 - 1380.937: 0.0588% ( 4) 00:28:31.224 1380.937 - 1388.739: 0.0683% ( 5) 00:28:31.224 1388.739 - 1396.541: 0.0910% ( 12) 00:28:31.224 1396.541 - 1404.343: 0.1043% ( 7) 00:28:31.224 1404.343 - 1412.145: 0.1327% ( 15) 00:28:31.224 1412.145 - 1419.947: 0.1688% ( 19) 00:28:31.224 1419.947 - 1427.749: 0.2143% ( 24) 00:28:31.224 1427.749 - 1435.550: 0.2674% ( 28) 00:28:31.224 1435.550 - 1443.352: 0.3508% ( 44) 00:28:31.224 1443.352 - 1451.154: 0.4418% ( 48) 00:28:31.224 1451.154 - 1458.956: 0.5423% ( 53) 00:28:31.224 1458.956 - 1466.758: 0.6504% ( 57) 00:28:31.224 1466.758 - 1474.560: 0.7604% ( 58) 00:28:31.224 1474.560 - 1482.362: 0.8931% ( 70) 00:28:31.224 1482.362 - 1490.164: 1.0638% ( 90) 00:28:31.224 1490.164 - 1497.966: 1.2572% ( 102) 00:28:31.224 1497.966 - 1505.768: 1.4582% ( 106) 00:28:31.224 1505.768 - 1513.570: 1.6497% ( 101) 00:28:31.224 1513.570 - 1521.371: 1.8811% ( 122) 00:28:31.224 1521.371 - 1529.173: 2.1200% ( 126) 00:28:31.224 1529.173 - 1536.975: 2.4101% ( 153) 00:28:31.224 1536.975 - 1544.777: 2.6794% ( 142) 00:28:31.224 1544.777 
- 1552.579: 2.9657% ( 151) 00:28:31.224 1552.579 - 1560.381: 3.3089% ( 181) 00:28:31.224 1560.381 - 1568.183: 3.6408% ( 175) 00:28:31.224 1568.183 - 1575.985: 3.9499% ( 163) 00:28:31.224 1575.985 - 1583.787: 4.2950% ( 182) 00:28:31.224 1583.787 - 1591.589: 4.6249% ( 174) 00:28:31.224 1591.589 - 1599.390: 4.9776% ( 186) 00:28:31.224 1599.390 - 1607.192: 5.3398% ( 191) 00:28:31.224 1607.192 - 1614.994: 5.7077% ( 194) 00:28:31.224 1614.994 - 1622.796: 6.0983% ( 206) 00:28:31.224 1622.796 - 1630.598: 6.4738% ( 198) 00:28:31.224 1630.598 - 1638.400: 6.8302% ( 188) 00:28:31.224 1638.400 - 1646.202: 7.2190% ( 205) 00:28:31.224 1646.202 - 1654.004: 7.5887% ( 195) 00:28:31.224 1654.004 - 1661.806: 7.9851% ( 209) 00:28:31.224 1661.806 - 1669.608: 8.4041% ( 221) 00:28:31.224 1669.608 - 1677.410: 8.7587% ( 187) 00:28:31.224 1677.410 - 1685.211: 9.1797% ( 222) 00:28:31.224 1685.211 - 1693.013: 9.5893% ( 216) 00:28:31.224 1693.013 - 1700.815: 10.0178% ( 226) 00:28:31.224 1700.815 - 1708.617: 10.4407% ( 223) 00:28:31.224 1708.617 - 1716.419: 10.8541% ( 218) 00:28:31.224 1716.419 - 1724.221: 11.3433% ( 258) 00:28:31.224 1724.221 - 1732.023: 11.8003% ( 241) 00:28:31.224 1732.023 - 1739.825: 12.2156% ( 219) 00:28:31.225 1739.825 - 1747.627: 12.7048% ( 258) 00:28:31.225 1747.627 - 1755.429: 13.1296% ( 224) 00:28:31.225 1755.429 - 1763.230: 13.6055% ( 251) 00:28:31.225 1763.230 - 1771.032: 14.0303% ( 224) 00:28:31.225 1771.032 - 1778.834: 14.4588% ( 226) 00:28:31.225 1778.834 - 1786.636: 14.9234% ( 245) 00:28:31.225 1786.636 - 1794.438: 15.3500% ( 225) 00:28:31.225 1794.438 - 1802.240: 15.7938% ( 234) 00:28:31.225 1802.240 - 1810.042: 16.2546% ( 243) 00:28:31.225 1810.042 - 1817.844: 16.6812% ( 225) 00:28:31.225 1817.844 - 1825.646: 17.1572% ( 251) 00:28:31.225 1825.646 - 1833.448: 17.6066% ( 237) 00:28:31.225 1833.448 - 1841.250: 18.0484% ( 233) 00:28:31.225 1841.250 - 1849.051: 18.5073% ( 242) 00:28:31.225 1849.051 - 1856.853: 18.9453% ( 231) 00:28:31.225 1856.853 - 1864.655: 19.3909% ( 235) 00:28:31.225 1864.655 - 1872.457: 19.8346% ( 234) 00:28:31.225 1872.457 - 1880.259: 20.3068% ( 249) 00:28:31.225 1880.259 - 1888.061: 20.7429% ( 230) 00:28:31.225 1888.061 - 1895.863: 21.1961% ( 239) 00:28:31.225 1895.863 - 1903.665: 21.6209% ( 224) 00:28:31.225 1903.665 - 1911.467: 22.0589% ( 231) 00:28:31.225 1911.467 - 1919.269: 22.5083% ( 237) 00:28:31.225 1919.269 - 1927.070: 22.9426% ( 229) 00:28:31.225 1927.070 - 1934.872: 23.3901% ( 236) 00:28:31.225 1934.872 - 1942.674: 23.8167% ( 225) 00:28:31.225 1942.674 - 1950.476: 24.2491% ( 228) 00:28:31.225 1950.476 - 1958.278: 24.7061% ( 241) 00:28:31.225 1958.278 - 1966.080: 25.1100% ( 213) 00:28:31.225 1966.080 - 1973.882: 25.5480% ( 231) 00:28:31.225 1973.882 - 1981.684: 25.9974% ( 237) 00:28:31.225 1981.684 - 1989.486: 26.4563% ( 242) 00:28:31.225 1989.486 - 1997.288: 26.8962% ( 232) 00:28:31.225 1997.288 - 2012.891: 27.7382% ( 444) 00:28:31.225 2012.891 - 2028.495: 28.6142% ( 462) 00:28:31.225 2028.495 - 2044.099: 29.4865% ( 460) 00:28:31.225 2044.099 - 2059.703: 30.3645% ( 463) 00:28:31.225 2059.703 - 2075.307: 31.2140% ( 448) 00:28:31.225 2075.307 - 2090.910: 32.0616% ( 447) 00:28:31.225 2090.910 - 2106.514: 32.9263% ( 456) 00:28:31.225 2106.514 - 2122.118: 33.7682% ( 444) 00:28:31.225 2122.118 - 2137.722: 34.6215% ( 450) 00:28:31.225 2137.722 - 2153.326: 35.4521% ( 438) 00:28:31.225 2153.326 - 2168.930: 36.3111% ( 453) 00:28:31.225 2168.930 - 2184.533: 37.1530% ( 444) 00:28:31.225 2184.533 - 2200.137: 37.9665% ( 429) 00:28:31.225 2200.137 - 2215.741: 38.8141% ( 
447) 00:28:31.225 2215.741 - 2231.345: 39.6883% ( 461) 00:28:31.225 2231.345 - 2246.949: 40.5302% ( 444) 00:28:31.225 2246.949 - 2262.552: 41.3437% ( 429) 00:28:31.225 2262.552 - 2278.156: 42.1837% ( 443) 00:28:31.225 2278.156 - 2293.760: 43.0332% ( 448) 00:28:31.225 2293.760 - 2309.364: 43.8752% ( 444) 00:28:31.225 2309.364 - 2324.968: 44.7190% ( 445) 00:28:31.225 2324.968 - 2340.571: 45.5495% ( 438) 00:28:31.225 2340.571 - 2356.175: 46.3725% ( 434) 00:28:31.225 2356.175 - 2371.779: 47.2068% ( 440) 00:28:31.225 2371.779 - 2387.383: 48.0677% ( 454) 00:28:31.225 2387.383 - 2402.987: 48.8831% ( 430) 00:28:31.225 2402.987 - 2418.590: 49.7402% ( 452) 00:28:31.225 2418.590 - 2434.194: 50.5897% ( 448) 00:28:31.225 2434.194 - 2449.798: 51.4355% ( 446) 00:28:31.225 2449.798 - 2465.402: 52.2831% ( 447) 00:28:31.225 2465.402 - 2481.006: 53.1041% ( 433) 00:28:31.225 2481.006 - 2496.610: 53.9518% ( 447) 00:28:31.225 2496.610 - 2512.213: 54.7994% ( 447) 00:28:31.225 2512.213 - 2527.817: 55.6280% ( 437) 00:28:31.225 2527.817 - 2543.421: 56.4946% ( 457) 00:28:31.225 2543.421 - 2559.025: 57.3119% ( 431) 00:28:31.225 2559.025 - 2574.629: 58.1500% ( 442) 00:28:31.225 2574.629 - 2590.232: 59.0033% ( 450) 00:28:31.225 2590.232 - 2605.836: 59.8529% ( 448) 00:28:31.225 2605.836 - 2621.440: 60.7156% ( 455) 00:28:31.225 2621.440 - 2637.044: 61.5557% ( 443) 00:28:31.225 2637.044 - 2652.648: 62.4697% ( 482) 00:28:31.225 2652.648 - 2668.251: 63.3381% ( 458) 00:28:31.225 2668.251 - 2683.855: 64.2370% ( 474) 00:28:31.225 2683.855 - 2699.459: 65.0959% ( 453) 00:28:31.225 2699.459 - 2715.063: 65.9644% ( 458) 00:28:31.225 2715.063 - 2730.667: 66.8443% ( 464) 00:28:31.225 2730.667 - 2746.270: 67.7241% ( 464) 00:28:31.225 2746.270 - 2761.874: 68.6021% ( 463) 00:28:31.225 2761.874 - 2777.478: 69.4516% ( 448) 00:28:31.225 2777.478 - 2793.082: 70.3409% ( 469) 00:28:31.225 2793.082 - 2808.686: 71.2094% ( 458) 00:28:31.225 2808.686 - 2824.290: 72.0969% ( 468) 00:28:31.225 2824.290 - 2839.893: 72.9578% ( 454) 00:28:31.225 2839.893 - 2855.497: 73.8490% ( 470) 00:28:31.225 2855.497 - 2871.101: 74.7213% ( 460) 00:28:31.225 2871.101 - 2886.705: 75.6144% ( 471) 00:28:31.225 2886.705 - 2902.309: 76.5132% ( 474) 00:28:31.225 2902.309 - 2917.912: 77.4044% ( 470) 00:28:31.225 2917.912 - 2933.516: 78.3184% ( 482) 00:28:31.225 2933.516 - 2949.120: 79.2305% ( 481) 00:28:31.225 2949.120 - 2964.724: 80.1426% ( 481) 00:28:31.225 2964.724 - 2980.328: 81.0755% ( 492) 00:28:31.225 2980.328 - 2995.931: 82.0028% ( 489) 00:28:31.225 2995.931 - 3011.535: 82.9282% ( 488) 00:28:31.225 3011.535 - 3027.139: 83.8630% ( 493) 00:28:31.225 3027.139 - 3042.743: 84.7770% ( 482) 00:28:31.225 3042.743 - 3058.347: 85.7175% ( 496) 00:28:31.225 3058.347 - 3073.950: 86.6391% ( 486) 00:28:31.225 3073.950 - 3089.554: 87.5417% ( 476) 00:28:31.225 3089.554 - 3105.158: 88.4614% ( 485) 00:28:31.225 3105.158 - 3120.762: 89.3905% ( 490) 00:28:31.225 3120.762 - 3136.366: 90.3045% ( 482) 00:28:31.225 3136.366 - 3151.970: 91.1901% ( 467) 00:28:31.225 3151.970 - 3167.573: 92.0415% ( 449) 00:28:31.225 3167.573 - 3183.177: 92.8891% ( 447) 00:28:31.225 3183.177 - 3198.781: 93.6723% ( 413) 00:28:31.225 3198.781 - 3214.385: 94.3985% ( 383) 00:28:31.225 3214.385 - 3229.989: 95.0224% ( 329) 00:28:31.225 3229.989 - 3245.592: 95.5609% ( 284) 00:28:31.225 3245.592 - 3261.196: 96.1089% ( 289) 00:28:31.225 3261.196 - 3276.800: 96.5337% ( 224) 00:28:31.225 3276.800 - 3292.404: 96.9015% ( 194) 00:28:31.225 3292.404 - 3308.008: 97.2030% ( 159) 00:28:31.225 3308.008 - 3323.611: 97.5083% ( 161) 
00:28:31.225 3323.611 - 3339.215: 97.7662% ( 136) 00:28:31.225 3339.215 - 3354.819: 97.9843% ( 115) 00:28:31.225 3354.819 - 3370.423: 98.1758% ( 101) 00:28:31.225 3370.423 - 3386.027: 98.3522% ( 93) 00:28:31.225 3386.027 - 3401.630: 98.4944% ( 75) 00:28:31.225 3401.630 - 3417.234: 98.6176% ( 65) 00:28:31.225 3417.234 - 3432.838: 98.7068% ( 47) 00:28:31.225 3432.838 - 3448.442: 98.7750% ( 36) 00:28:31.225 3448.442 - 3464.046: 98.8300% ( 29) 00:28:31.225 3464.046 - 3479.650: 98.8736% ( 23) 00:28:31.225 3479.650 - 3495.253: 98.9097% ( 19) 00:28:31.225 3495.253 - 3510.857: 98.9400% ( 16) 00:28:31.225 3510.857 - 3526.461: 98.9703% ( 16) 00:28:31.225 3526.461 - 3542.065: 98.9988% ( 15) 00:28:31.225 3542.065 - 3557.669: 99.0234% ( 13) 00:28:31.225 3557.669 - 3573.272: 99.0462% ( 12) 00:28:31.225 3573.272 - 3588.876: 99.0727% ( 14) 00:28:31.225 3588.876 - 3604.480: 99.0955% ( 12) 00:28:31.225 3604.480 - 3620.084: 99.1145% ( 10) 00:28:31.225 3620.084 - 3635.688: 99.1353% ( 11) 00:28:31.225 3635.688 - 3651.291: 99.1524% ( 9) 00:28:31.225 3651.291 - 3666.895: 99.1751% ( 12) 00:28:31.225 3666.895 - 3682.499: 99.1922% ( 9) 00:28:31.225 3682.499 - 3698.103: 99.2074% ( 8) 00:28:31.225 3698.103 - 3713.707: 99.2320% ( 13) 00:28:31.225 3713.707 - 3729.310: 99.2491% ( 9) 00:28:31.225 3729.310 - 3744.914: 99.2624% ( 7) 00:28:31.225 3744.914 - 3760.518: 99.2813% ( 10) 00:28:31.225 3760.518 - 3776.122: 99.3003% ( 10) 00:28:31.225 3776.122 - 3791.726: 99.3136% ( 7) 00:28:31.225 3791.726 - 3807.330: 99.3287% ( 8) 00:28:31.225 3807.330 - 3822.933: 99.3477% ( 10) 00:28:31.225 3822.933 - 3838.537: 99.3610% ( 7) 00:28:31.225 3838.537 - 3854.141: 99.3780% ( 9) 00:28:31.225 3854.141 - 3869.745: 99.3970% ( 10) 00:28:31.225 3869.745 - 3885.349: 99.4103% ( 7) 00:28:31.225 3885.349 - 3900.952: 99.4235% ( 7) 00:28:31.225 3900.952 - 3916.556: 99.4406% ( 9) 00:28:31.225 3916.556 - 3932.160: 99.4520% ( 6) 00:28:31.225 3932.160 - 3947.764: 99.4653% ( 7) 00:28:31.225 3947.764 - 3963.368: 99.4766% ( 6) 00:28:31.225 3963.368 - 3978.971: 99.4899% ( 7) 00:28:31.225 3978.971 - 3994.575: 99.5013% ( 6) 00:28:31.225 3994.575 - 4025.783: 99.5297% ( 15) 00:28:31.225 4025.783 - 4056.990: 99.5563% ( 14) 00:28:31.225 4056.990 - 4088.198: 99.5847% ( 15) 00:28:31.225 4088.198 - 4119.406: 99.6113% ( 14) 00:28:31.225 4119.406 - 4150.613: 99.6359% ( 13) 00:28:31.225 4150.613 - 4181.821: 99.6663% ( 16) 00:28:31.225 4181.821 - 4213.029: 99.6947% ( 15) 00:28:31.225 4213.029 - 4244.236: 99.7213% ( 14) 00:28:31.225 4244.236 - 4275.444: 99.7497% ( 15) 00:28:31.225 4275.444 - 4306.651: 99.7762% ( 14) 00:28:31.225 4306.651 - 4337.859: 99.8028% ( 14) 00:28:31.225 4337.859 - 4369.067: 99.8312% ( 15) 00:28:31.225 4369.067 - 4400.274: 99.8540% ( 12) 00:28:31.225 4400.274 - 4431.482: 99.8730% ( 10) 00:28:31.225 4431.482 - 4462.690: 99.8862% ( 7) 00:28:31.225 4462.690 - 4493.897: 99.8995% ( 7) 00:28:31.225 4493.897 - 4525.105: 99.9109% ( 6) 00:28:31.225 4525.105 - 4556.312: 99.9204% ( 5) 00:28:31.225 4556.312 - 4587.520: 99.9279% ( 4) 00:28:31.225 4587.520 - 4618.728: 99.9336% ( 3) 00:28:31.225 4618.728 - 4649.935: 99.9355% ( 1) 00:28:31.225 4649.935 - 4681.143: 99.9393% ( 2) 00:28:31.225 4681.143 - 4712.350: 99.9450% ( 3) 00:28:31.225 4712.350 - 4743.558: 99.9469% ( 1) 00:28:31.225 4743.558 - 4774.766: 99.9507% ( 2) 00:28:31.225 4774.766 - 4805.973: 99.9545% ( 2) 00:28:31.225 4805.973 - 4837.181: 99.9583% ( 2) 00:28:31.225 4837.181 - 4868.389: 99.9621% ( 2) 00:28:31.225 4868.389 - 4899.596: 99.9659% ( 2) 00:28:31.225 4899.596 - 4930.804: 99.9697% ( 2) 
00:28:31.225 4930.804 - 4962.011: 99.9735% ( 2) 00:28:31.225 4962.011 - 4993.219: 99.9772% ( 2) 00:28:31.226 4993.219 - 5024.427: 99.9791% ( 1) 00:28:31.226 5024.427 - 5055.634: 99.9829% ( 2) 00:28:31.226 5055.634 - 5086.842: 99.9867% ( 2) 00:28:31.226 5086.842 - 5118.050: 99.9905% ( 2) 00:28:31.226 5118.050 - 5149.257: 99.9943% ( 2) 00:28:31.226 5149.257 - 5180.465: 99.9981% ( 2) 00:28:31.226 5211.672 - 5242.880: 100.0000% ( 1) 00:28:31.226 00:28:31.226 16:46:02 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:28:32.608 Initializing NVMe Controllers 00:28:32.608 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:32.608 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:32.608 Initialization complete. Launching workers. 00:28:32.608 ======================================================== 00:28:32.608 Latency(us) 00:28:32.608 Device Information : IOPS MiB/s Average min max 00:28:32.608 PCIE (0000:00:06.0) NSID 1 from core 0: 61221.35 717.44 2092.31 966.94 10383.69 00:28:32.608 ======================================================== 00:28:32.608 Total : 61221.35 717.44 2092.31 966.94 10383.69 00:28:32.608 00:28:32.608 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:32.608 ================================================================================= 00:28:32.608 1.00000% : 1474.560us 00:28:32.608 10.00000% : 1685.211us 00:28:32.608 25.00000% : 1817.844us 00:28:32.608 50.00000% : 2028.495us 00:28:32.608 75.00000% : 2309.364us 00:28:32.608 90.00000% : 2590.232us 00:28:32.608 95.00000% : 2839.893us 00:28:32.608 98.00000% : 3136.366us 00:28:32.608 99.00000% : 3292.404us 00:28:32.608 99.50000% : 3448.442us 00:28:32.608 99.90000% : 4244.236us 00:28:32.608 99.99000% : 10173.684us 00:28:32.608 99.99900% : 10423.345us 00:28:32.608 99.99990% : 10423.345us 00:28:32.608 99.99999% : 10423.345us 00:28:32.608 00:28:32.608 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:32.608 ============================================================================== 00:28:32.608 Range in us Cumulative IO count 00:28:32.608 963.535 - 967.436: 0.0016% ( 1) 00:28:32.608 979.139 - 983.040: 0.0033% ( 1) 00:28:32.608 1006.446 - 1014.248: 0.0065% ( 2) 00:28:32.608 1014.248 - 1022.050: 0.0114% ( 3) 00:28:32.608 1022.050 - 1029.851: 0.0163% ( 3) 00:28:32.608 1029.851 - 1037.653: 0.0229% ( 4) 00:28:32.608 1037.653 - 1045.455: 0.0310% ( 5) 00:28:32.608 1045.455 - 1053.257: 0.0327% ( 1) 00:28:32.608 1053.257 - 1061.059: 0.0392% ( 4) 00:28:32.608 1061.059 - 1068.861: 0.0408% ( 1) 00:28:32.608 1068.861 - 1076.663: 0.0457% ( 3) 00:28:32.608 1076.663 - 1084.465: 0.0523% ( 4) 00:28:32.608 1084.465 - 1092.267: 0.0572% ( 3) 00:28:32.608 1092.267 - 1100.069: 0.0604% ( 2) 00:28:32.608 1100.069 - 1107.870: 0.0670% ( 4) 00:28:32.608 1107.870 - 1115.672: 0.0751% ( 5) 00:28:32.608 1115.672 - 1123.474: 0.0768% ( 1) 00:28:32.608 1131.276 - 1139.078: 0.0882% ( 7) 00:28:32.608 1139.078 - 1146.880: 0.0980% ( 6) 00:28:32.608 1146.880 - 1154.682: 0.1078% ( 6) 00:28:32.608 1154.682 - 1162.484: 0.1192% ( 7) 00:28:32.608 1162.484 - 1170.286: 0.1241% ( 3) 00:28:32.608 1170.286 - 1178.088: 0.1372% ( 8) 00:28:32.608 1178.088 - 1185.890: 0.1421% ( 3) 00:28:32.608 1185.890 - 1193.691: 0.1519% ( 6) 00:28:32.608 1193.691 - 1201.493: 0.1600% ( 5) 00:28:32.608 1201.493 - 1209.295: 0.1698% ( 6) 00:28:32.608 1209.295 - 1217.097: 0.1813% ( 7) 00:28:32.608 1217.097 - 1224.899: 0.1845% ( 2) 00:28:32.608 1224.899 - 1232.701: 0.1894% 
( 3) 00:28:32.608 1232.701 - 1240.503: 0.1943% ( 3) 00:28:32.608 1240.503 - 1248.305: 0.2107% ( 10) 00:28:32.608 1248.305 - 1256.107: 0.2188% ( 5) 00:28:32.608 1256.107 - 1263.909: 0.2352% ( 10) 00:28:32.608 1263.909 - 1271.710: 0.2450% ( 6) 00:28:32.608 1271.710 - 1279.512: 0.2580% ( 8) 00:28:32.608 1279.512 - 1287.314: 0.2646% ( 4) 00:28:32.608 1287.314 - 1295.116: 0.2711% ( 4) 00:28:32.608 1295.116 - 1302.918: 0.2842% ( 8) 00:28:32.608 1302.918 - 1310.720: 0.2923% ( 5) 00:28:32.608 1310.720 - 1318.522: 0.3054% ( 8) 00:28:32.608 1318.522 - 1326.324: 0.3136% ( 5) 00:28:32.608 1326.324 - 1334.126: 0.3266% ( 8) 00:28:32.608 1334.126 - 1341.928: 0.3381% ( 7) 00:28:32.608 1341.928 - 1349.730: 0.3413% ( 2) 00:28:32.608 1349.730 - 1357.531: 0.3511% ( 6) 00:28:32.608 1357.531 - 1365.333: 0.3609% ( 6) 00:28:32.608 1365.333 - 1373.135: 0.3740% ( 8) 00:28:32.608 1373.135 - 1380.937: 0.3854% ( 7) 00:28:32.608 1380.937 - 1388.739: 0.3985% ( 8) 00:28:32.608 1388.739 - 1396.541: 0.4213% ( 14) 00:28:32.608 1396.541 - 1404.343: 0.4377% ( 10) 00:28:32.608 1404.343 - 1412.145: 0.4671% ( 18) 00:28:32.608 1412.145 - 1419.947: 0.5046% ( 23) 00:28:32.608 1419.947 - 1427.749: 0.5569% ( 32) 00:28:32.608 1427.749 - 1435.550: 0.6124% ( 34) 00:28:32.608 1435.550 - 1443.352: 0.6941% ( 50) 00:28:32.608 1443.352 - 1451.154: 0.7692% ( 46) 00:28:32.608 1451.154 - 1458.956: 0.8656% ( 59) 00:28:32.608 1458.956 - 1466.758: 0.9978% ( 81) 00:28:32.608 1466.758 - 1474.560: 1.1432% ( 89) 00:28:32.608 1474.560 - 1482.362: 1.3359% ( 118) 00:28:32.608 1482.362 - 1490.164: 1.5123% ( 108) 00:28:32.608 1490.164 - 1497.966: 1.7458% ( 143) 00:28:32.608 1497.966 - 1505.768: 1.9679% ( 136) 00:28:32.608 1505.768 - 1513.570: 2.2178% ( 153) 00:28:32.608 1513.570 - 1521.371: 2.4677% ( 153) 00:28:32.608 1521.371 - 1529.173: 2.7290% ( 160) 00:28:32.608 1529.173 - 1536.975: 2.9984% ( 165) 00:28:32.608 1536.975 - 1544.777: 3.2597% ( 160) 00:28:32.608 1544.777 - 1552.579: 3.5553% ( 181) 00:28:32.608 1552.579 - 1560.381: 3.8689% ( 192) 00:28:32.608 1560.381 - 1568.183: 4.1351% ( 163) 00:28:32.608 1568.183 - 1575.985: 4.4585% ( 198) 00:28:32.608 1575.985 - 1583.787: 4.7508% ( 179) 00:28:32.608 1583.787 - 1591.589: 5.0709% ( 196) 00:28:32.608 1591.589 - 1599.390: 5.3926% ( 197) 00:28:32.608 1599.390 - 1607.192: 5.8123% ( 257) 00:28:32.608 1607.192 - 1614.994: 6.2712% ( 281) 00:28:32.608 1614.994 - 1622.796: 6.6191% ( 213) 00:28:32.608 1622.796 - 1630.598: 7.0421% ( 259) 00:28:32.608 1630.598 - 1638.400: 7.5434% ( 307) 00:28:32.608 1638.400 - 1646.202: 7.9860% ( 271) 00:28:32.608 1646.202 - 1654.004: 8.4106% ( 260) 00:28:32.608 1654.004 - 1661.806: 8.7650% ( 217) 00:28:32.608 1661.806 - 1669.608: 9.2762% ( 313) 00:28:32.608 1669.608 - 1677.410: 9.9262% ( 398) 00:28:32.608 1677.410 - 1685.211: 10.5729% ( 396) 00:28:32.608 1685.211 - 1693.013: 11.1331% ( 343) 00:28:32.608 1693.013 - 1700.815: 11.9072% ( 474) 00:28:32.608 1700.815 - 1708.617: 12.4641% ( 341) 00:28:32.608 1708.617 - 1716.419: 13.4423% ( 599) 00:28:32.608 1716.419 - 1724.221: 14.3928% ( 582) 00:28:32.608 1724.221 - 1732.023: 15.2780% ( 542) 00:28:32.608 1732.023 - 1739.825: 15.9018% ( 382) 00:28:32.608 1739.825 - 1747.627: 16.7086% ( 494) 00:28:32.608 1747.627 - 1755.429: 17.7897% ( 662) 00:28:32.608 1755.429 - 1763.230: 18.8741% ( 664) 00:28:32.608 1763.230 - 1771.032: 20.0565% ( 724) 00:28:32.608 1771.032 - 1778.834: 21.0054% ( 581) 00:28:32.608 1778.834 - 1786.636: 21.7435% ( 452) 00:28:32.608 1786.636 - 1794.438: 22.4817% ( 452) 00:28:32.608 1794.438 - 1802.240: 23.3897% ( 556) 
00:28:32.608 1802.240 - 1810.042: 24.2667% ( 537) 00:28:32.608 1810.042 - 1817.844: 25.3185% ( 644) 00:28:32.608 1817.844 - 1825.646: 26.4274% ( 679) 00:28:32.608 1825.646 - 1833.448: 27.4693% ( 638) 00:28:32.608 1833.448 - 1841.250: 28.4802% ( 619) 00:28:32.608 1841.250 - 1849.051: 29.8700% ( 851) 00:28:32.608 1849.051 - 1856.853: 31.1193% ( 765) 00:28:32.608 1856.853 - 1864.655: 32.1972% ( 660) 00:28:32.608 1864.655 - 1872.457: 33.0579% ( 527) 00:28:32.608 1872.457 - 1880.259: 33.9169% ( 526) 00:28:32.608 1880.259 - 1888.061: 34.7253% ( 495) 00:28:32.608 1888.061 - 1895.863: 35.6154% ( 545) 00:28:32.608 1895.863 - 1903.665: 36.6655% ( 643) 00:28:32.608 1903.665 - 1911.467: 37.5065% ( 515) 00:28:32.608 1911.467 - 1919.269: 38.3639% ( 525) 00:28:32.608 1919.269 - 1927.070: 39.1054% ( 454) 00:28:32.608 1927.070 - 1934.872: 39.7554% ( 398) 00:28:32.608 1934.872 - 1942.674: 40.8022% ( 641) 00:28:32.608 1942.674 - 1950.476: 41.5812% ( 477) 00:28:32.608 1950.476 - 1958.278: 42.5333% ( 583) 00:28:32.608 1958.278 - 1966.080: 43.4560% ( 565) 00:28:32.608 1966.080 - 1973.882: 44.3004% ( 517) 00:28:32.608 1973.882 - 1981.684: 45.0696% ( 471) 00:28:32.608 1981.684 - 1989.486: 46.0152% ( 579) 00:28:32.608 1989.486 - 1997.288: 46.9477% ( 571) 00:28:32.608 1997.288 - 2012.891: 49.1557% ( 1352) 00:28:32.608 2012.891 - 2028.495: 51.3555% ( 1347) 00:28:32.608 2028.495 - 2044.099: 53.2483% ( 1159) 00:28:32.608 2044.099 - 2059.703: 54.7459% ( 917) 00:28:32.608 2059.703 - 2075.307: 56.2565% ( 925) 00:28:32.608 2075.307 - 2090.910: 57.7884% ( 938) 00:28:32.608 2090.910 - 2106.514: 59.3383% ( 949) 00:28:32.608 2106.514 - 2122.118: 60.8767% ( 942) 00:28:32.608 2122.118 - 2137.722: 62.1309% ( 768) 00:28:32.608 2137.722 - 2153.326: 63.4831% ( 828) 00:28:32.608 2153.326 - 2168.930: 64.9203% ( 880) 00:28:32.608 2168.930 - 2184.533: 66.1648% ( 762) 00:28:32.608 2184.533 - 2200.137: 67.5333% ( 838) 00:28:32.608 2200.137 - 2215.741: 68.6569% ( 688) 00:28:32.608 2215.741 - 2231.345: 69.6580% ( 613) 00:28:32.608 2231.345 - 2246.949: 70.7833% ( 689) 00:28:32.608 2246.949 - 2262.552: 71.8693% ( 665) 00:28:32.608 2262.552 - 2278.156: 73.1203% ( 766) 00:28:32.608 2278.156 - 2293.760: 74.3010% ( 723) 00:28:32.608 2293.760 - 2309.364: 75.3626% ( 650) 00:28:32.608 2309.364 - 2324.968: 76.3620% ( 612) 00:28:32.608 2324.968 - 2340.571: 77.4922% ( 692) 00:28:32.608 2340.571 - 2356.175: 78.5717% ( 661) 00:28:32.608 2356.175 - 2371.779: 79.4976% ( 567) 00:28:32.608 2371.779 - 2387.383: 80.3632% ( 530) 00:28:32.608 2387.383 - 2402.987: 81.2386% ( 536) 00:28:32.608 2402.987 - 2418.590: 82.2037% ( 591) 00:28:32.608 2418.590 - 2434.194: 83.1461% ( 577) 00:28:32.608 2434.194 - 2449.798: 83.9871% ( 515) 00:28:32.608 2449.798 - 2465.402: 84.8037% ( 500) 00:28:32.608 2465.402 - 2481.006: 85.5386% ( 450) 00:28:32.608 2481.006 - 2496.610: 86.2980% ( 465) 00:28:32.608 2496.610 - 2512.213: 86.9660% ( 409) 00:28:32.608 2512.213 - 2527.817: 87.7090% ( 455) 00:28:32.608 2527.817 - 2543.421: 88.3264% ( 378) 00:28:32.608 2543.421 - 2559.025: 88.9813% ( 401) 00:28:32.608 2559.025 - 2574.629: 89.5920% ( 374) 00:28:32.608 2574.629 - 2590.232: 90.1881% ( 365) 00:28:32.608 2590.232 - 2605.836: 90.6911% ( 308) 00:28:32.608 2605.836 - 2621.440: 91.1223% ( 264) 00:28:32.608 2621.440 - 2637.044: 91.5289% ( 249) 00:28:32.608 2637.044 - 2652.648: 91.9176% ( 238) 00:28:32.608 2652.648 - 2668.251: 92.2998% ( 234) 00:28:32.608 2668.251 - 2683.855: 92.6558% ( 218) 00:28:32.608 2683.855 - 2699.459: 92.9824% ( 200) 00:28:32.608 2699.459 - 2715.063: 93.3009% ( 195) 
00:28:32.608 2715.063 - 2730.667: 93.5802% ( 171) 00:28:32.608 2730.667 - 2746.270: 93.8366% ( 157) 00:28:32.608 2746.270 - 2761.874: 94.0848% ( 152) 00:28:32.608 2761.874 - 2777.478: 94.3216% ( 145) 00:28:32.608 2777.478 - 2793.082: 94.5388% ( 133) 00:28:32.608 2793.082 - 2808.686: 94.7495% ( 129) 00:28:32.608 2808.686 - 2824.290: 94.9553% ( 126) 00:28:32.608 2824.290 - 2839.893: 95.1578% ( 124) 00:28:32.608 2839.893 - 2855.497: 95.3505% ( 118) 00:28:32.608 2855.497 - 2871.101: 95.5448% ( 119) 00:28:32.608 2871.101 - 2886.705: 95.7294% ( 113) 00:28:32.608 2886.705 - 2902.309: 95.9008% ( 105) 00:28:32.608 2902.309 - 2917.912: 96.0690% ( 103) 00:28:32.608 2917.912 - 2933.516: 96.2373% ( 103) 00:28:32.608 2933.516 - 2949.120: 96.3908% ( 94) 00:28:32.608 2949.120 - 2964.724: 96.5459% ( 95) 00:28:32.608 2964.724 - 2980.328: 96.7076% ( 99) 00:28:32.608 2980.328 - 2995.931: 96.8513% ( 88) 00:28:32.608 2995.931 - 3011.535: 97.0146% ( 100) 00:28:32.608 3011.535 - 3027.139: 97.2057% ( 117) 00:28:32.608 3027.139 - 3042.743: 97.3494% ( 88) 00:28:32.608 3042.743 - 3058.347: 97.4833% ( 82) 00:28:32.608 3058.347 - 3073.950: 97.6173% ( 82) 00:28:32.608 3073.950 - 3089.554: 97.7381% ( 74) 00:28:32.608 3089.554 - 3105.158: 97.8622% ( 76) 00:28:32.608 3105.158 - 3120.762: 97.9765% ( 70) 00:28:32.608 3120.762 - 3136.366: 98.0811% ( 64) 00:28:32.608 3136.366 - 3151.970: 98.1742% ( 57) 00:28:32.608 3151.970 - 3167.573: 98.2770% ( 63) 00:28:32.608 3167.573 - 3183.177: 98.3767% ( 61) 00:28:32.608 3183.177 - 3198.781: 98.4583% ( 50) 00:28:32.608 3198.781 - 3214.385: 98.5449% ( 53) 00:28:32.608 3214.385 - 3229.989: 98.6233% ( 48) 00:28:32.608 3229.989 - 3245.592: 98.7131% ( 55) 00:28:32.608 3245.592 - 3261.196: 98.8160% ( 63) 00:28:32.608 3261.196 - 3276.800: 98.8944% ( 48) 00:28:32.608 3276.800 - 3292.404: 99.0120% ( 72) 00:28:32.608 3292.404 - 3308.008: 99.0724% ( 37) 00:28:32.608 3308.008 - 3323.611: 99.1344% ( 38) 00:28:32.608 3323.611 - 3339.215: 99.1883% ( 33) 00:28:32.608 3339.215 - 3354.819: 99.2422% ( 33) 00:28:32.608 3354.819 - 3370.423: 99.2978% ( 34) 00:28:32.608 3370.423 - 3386.027: 99.3500% ( 32) 00:28:32.608 3386.027 - 3401.630: 99.3908% ( 25) 00:28:32.608 3401.630 - 3417.234: 99.4317% ( 25) 00:28:32.608 3417.234 - 3432.838: 99.4709% ( 24) 00:28:32.608 3432.838 - 3448.442: 99.5117% ( 25) 00:28:32.608 3448.442 - 3464.046: 99.5427% ( 19) 00:28:32.608 3464.046 - 3479.650: 99.5721% ( 18) 00:28:32.608 3479.650 - 3495.253: 99.5950% ( 14) 00:28:32.608 3495.253 - 3510.857: 99.6211% ( 16) 00:28:32.608 3510.857 - 3526.461: 99.6440% ( 14) 00:28:32.609 3526.461 - 3542.065: 99.6636% ( 12) 00:28:32.609 3542.065 - 3557.669: 99.6766% ( 8) 00:28:32.609 3557.669 - 3573.272: 99.6930% ( 10) 00:28:32.609 3573.272 - 3588.876: 99.7028% ( 6) 00:28:32.609 3588.876 - 3604.480: 99.7126% ( 6) 00:28:32.609 3604.480 - 3620.084: 99.7191% ( 4) 00:28:32.609 3620.084 - 3635.688: 99.7273% ( 5) 00:28:32.609 3635.688 - 3651.291: 99.7354% ( 5) 00:28:32.609 3651.291 - 3666.895: 99.7420% ( 4) 00:28:32.609 3666.895 - 3682.499: 99.7469% ( 3) 00:28:32.609 3682.499 - 3698.103: 99.7501% ( 2) 00:28:32.609 3698.103 - 3713.707: 99.7534% ( 2) 00:28:32.609 3729.310 - 3744.914: 99.7550% ( 1) 00:28:32.609 3744.914 - 3760.518: 99.7567% ( 1) 00:28:32.609 3760.518 - 3776.122: 99.7583% ( 1) 00:28:32.609 3776.122 - 3791.726: 99.7599% ( 1) 00:28:32.609 3791.726 - 3807.330: 99.7616% ( 1) 00:28:32.609 3822.933 - 3838.537: 99.7632% ( 1) 00:28:32.609 3838.537 - 3854.141: 99.7648% ( 1) 00:28:32.609 3869.745 - 3885.349: 99.7697% ( 3) 00:28:32.609 3885.349 - 
3900.952: 99.7746% ( 3) 00:28:32.609 3900.952 - 3916.556: 99.7779% ( 2) 00:28:32.609 3916.556 - 3932.160: 99.7812% ( 2) 00:28:32.609 3932.160 - 3947.764: 99.7844% ( 2) 00:28:32.609 3947.764 - 3963.368: 99.7877% ( 2) 00:28:32.609 3963.368 - 3978.971: 99.7975% ( 6) 00:28:32.609 3978.971 - 3994.575: 99.8057% ( 5) 00:28:32.609 3994.575 - 4025.783: 99.8122% ( 4) 00:28:32.609 4025.783 - 4056.990: 99.8204% ( 5) 00:28:32.609 4056.990 - 4088.198: 99.8367% ( 10) 00:28:32.609 4088.198 - 4119.406: 99.8498% ( 8) 00:28:32.609 4119.406 - 4150.613: 99.8644% ( 9) 00:28:32.609 4150.613 - 4181.821: 99.8922% ( 17) 00:28:32.609 4181.821 - 4213.029: 99.8971% ( 3) 00:28:32.609 4213.029 - 4244.236: 99.9020% ( 3) 00:28:32.609 4244.236 - 4275.444: 99.9036% ( 1) 00:28:32.609 4275.444 - 4306.651: 99.9069% ( 2) 00:28:32.609 4306.651 - 4337.859: 99.9085% ( 1) 00:28:32.609 4337.859 - 4369.067: 99.9118% ( 2) 00:28:32.609 4369.067 - 4400.274: 99.9134% ( 1) 00:28:32.609 4400.274 - 4431.482: 99.9151% ( 1) 00:28:32.609 4431.482 - 4462.690: 99.9183% ( 2) 00:28:32.609 4462.690 - 4493.897: 99.9200% ( 1) 00:28:32.609 4493.897 - 4525.105: 99.9216% ( 1) 00:28:32.609 4525.105 - 4556.312: 99.9232% ( 1) 00:28:32.609 4556.312 - 4587.520: 99.9265% ( 2) 00:28:32.609 4587.520 - 4618.728: 99.9281% ( 1) 00:28:32.609 4618.728 - 4649.935: 99.9298% ( 1) 00:28:32.609 4649.935 - 4681.143: 99.9330% ( 2) 00:28:32.609 4681.143 - 4712.350: 99.9347% ( 1) 00:28:32.609 4712.350 - 4743.558: 99.9379% ( 2) 00:28:32.609 4743.558 - 4774.766: 99.9396% ( 1) 00:28:32.609 4774.766 - 4805.973: 99.9412% ( 1) 00:28:32.609 4805.973 - 4837.181: 99.9428% ( 1) 00:28:32.609 4837.181 - 4868.389: 99.9445% ( 1) 00:28:32.609 4868.389 - 4899.596: 99.9477% ( 2) 00:28:32.609 4899.596 - 4930.804: 99.9494% ( 1) 00:28:32.609 4930.804 - 4962.011: 99.9526% ( 2) 00:28:32.609 4962.011 - 4993.219: 99.9543% ( 1) 00:28:32.609 4993.219 - 5024.427: 99.9575% ( 2) 00:28:32.609 5024.427 - 5055.634: 99.9592% ( 1) 00:28:32.609 5055.634 - 5086.842: 99.9608% ( 1) 00:28:32.609 5086.842 - 5118.050: 99.9641% ( 2) 00:28:32.609 5118.050 - 5149.257: 99.9673% ( 2) 00:28:32.609 5149.257 - 5180.465: 99.9690% ( 1) 00:28:32.609 5180.465 - 5211.672: 99.9706% ( 1) 00:28:32.609 5211.672 - 5242.880: 99.9739% ( 2) 00:28:32.609 5274.088 - 5305.295: 99.9755% ( 1) 00:28:32.609 5305.295 - 5336.503: 99.9771% ( 1) 00:28:32.609 5336.503 - 5367.710: 99.9788% ( 1) 00:28:32.609 7146.545 - 7177.752: 99.9804% ( 1) 00:28:32.609 7208.960 - 7240.168: 99.9820% ( 1) 00:28:32.609 8613.303 - 8675.718: 99.9837% ( 1) 00:28:32.609 8987.794 - 9050.210: 99.9853% ( 1) 00:28:32.609 9050.210 - 9112.625: 99.9886% ( 2) 00:28:32.609 10111.269 - 10173.684: 99.9902% ( 1) 00:28:32.609 10173.684 - 10236.099: 99.9918% ( 1) 00:28:32.609 10236.099 - 10298.514: 99.9951% ( 2) 00:28:32.609 10298.514 - 10360.930: 99.9984% ( 2) 00:28:32.609 10360.930 - 10423.345: 100.0000% ( 1) 00:28:32.609 00:28:32.609 16:46:03 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:28:32.609 00:28:32.609 real 0m2.644s 00:28:32.609 user 0m2.231s 00:28:32.609 sys 0m0.258s 00:28:32.609 16:46:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.609 16:46:03 -- common/autotest_common.sh@10 -- # set +x 00:28:32.609 ************************************ 00:28:32.609 END TEST nvme_perf 00:28:32.609 ************************************ 00:28:32.609 16:46:03 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:28:32.609 16:46:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:28:32.609 16:46:03 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:28:32.609 16:46:03 -- common/autotest_common.sh@10 -- # set +x 00:28:32.609 ************************************ 00:28:32.609 START TEST nvme_hello_world 00:28:32.609 ************************************ 00:28:32.609 16:46:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:28:32.866 Initializing NVMe Controllers 00:28:32.866 Attached to 0000:00:06.0 00:28:32.866 Namespace ID: 1 size: 5GB 00:28:32.866 Initialization complete. 00:28:32.866 INFO: using host memory buffer for IO 00:28:32.866 Hello world! 00:28:32.866 00:28:32.866 real 0m0.287s 00:28:32.866 user 0m0.087s 00:28:32.866 sys 0m0.103s 00:28:32.866 16:46:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.866 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:28:32.866 ************************************ 00:28:32.866 END TEST nvme_hello_world 00:28:32.866 ************************************ 00:28:32.866 16:46:04 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:28:32.866 16:46:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:32.866 16:46:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:32.866 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:28:32.866 ************************************ 00:28:32.866 START TEST nvme_sgl 00:28:32.866 ************************************ 00:28:32.866 16:46:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:28:33.123 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:28:33.123 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:28:33.123 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:28:33.123 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:28:33.123 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:28:33.123 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:28:33.123 NVMe Readv/Writev Request test 00:28:33.123 Attached to 0000:00:06.0 00:28:33.123 0000:00:06.0: build_io_request_2 test passed 00:28:33.123 0000:00:06.0: build_io_request_4 test passed 00:28:33.123 0000:00:06.0: build_io_request_5 test passed 00:28:33.123 0000:00:06.0: build_io_request_6 test passed 00:28:33.123 0000:00:06.0: build_io_request_7 test passed 00:28:33.123 0000:00:06.0: build_io_request_10 test passed 00:28:33.123 Cleaning up... 00:28:33.123 00:28:33.123 real 0m0.309s 00:28:33.123 user 0m0.133s 00:28:33.123 sys 0m0.090s 00:28:33.123 16:46:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.123 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.123 ************************************ 00:28:33.123 END TEST nvme_sgl 00:28:33.123 ************************************ 00:28:33.123 16:46:04 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:28:33.123 16:46:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:33.123 16:46:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:33.123 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.123 ************************************ 00:28:33.123 START TEST nvme_e2edp 00:28:33.123 ************************************ 00:28:33.123 16:46:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:28:33.688 NVMe Write/Read with End-to-End data protection test 00:28:33.688 Attached to 0000:00:06.0 00:28:33.688 Cleaning up... 
00:28:33.688 00:28:33.688 real 0m0.291s 00:28:33.688 user 0m0.092s 00:28:33.688 sys 0m0.135s 00:28:33.688 16:46:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.688 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.688 ************************************ 00:28:33.688 END TEST nvme_e2edp 00:28:33.688 ************************************ 00:28:33.688 16:46:04 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:28:33.688 16:46:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:33.688 16:46:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:33.688 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.688 ************************************ 00:28:33.688 START TEST nvme_reserve 00:28:33.688 ************************************ 00:28:33.688 16:46:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:28:33.945 ===================================================== 00:28:33.945 NVMe Controller at PCI bus 0, device 6, function 0 00:28:33.945 ===================================================== 00:28:33.946 Reservations: Not Supported 00:28:33.946 Reservation test passed 00:28:33.946 00:28:33.946 real 0m0.296s 00:28:33.946 user 0m0.089s 00:28:33.946 sys 0m0.138s 00:28:33.946 16:46:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.946 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:33.946 ************************************ 00:28:33.946 END TEST nvme_reserve 00:28:33.946 ************************************ 00:28:33.946 16:46:05 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:28:33.946 16:46:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:33.946 16:46:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:33.946 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:33.946 ************************************ 00:28:33.946 START TEST nvme_err_injection 00:28:33.946 ************************************ 00:28:33.946 16:46:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:28:34.204 NVMe Error Injection test 00:28:34.204 Attached to 0000:00:06.0 00:28:34.204 0000:00:06.0: get features failed as expected 00:28:34.204 0000:00:06.0: get features successfully as expected 00:28:34.204 0000:00:06.0: read failed as expected 00:28:34.204 0000:00:06.0: read successfully as expected 00:28:34.204 Cleaning up... 
00:28:34.204 00:28:34.204 real 0m0.315s 00:28:34.204 user 0m0.069s 00:28:34.204 sys 0m0.169s 00:28:34.204 16:46:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:34.204 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:34.204 ************************************ 00:28:34.204 END TEST nvme_err_injection 00:28:34.204 ************************************ 00:28:34.204 16:46:05 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:28:34.204 16:46:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:28:34.204 16:46:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:34.204 16:46:05 -- common/autotest_common.sh@10 -- # set +x 00:28:34.204 ************************************ 00:28:34.204 START TEST nvme_overhead 00:28:34.204 ************************************ 00:28:34.204 16:46:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:28:35.584 Initializing NVMe Controllers 00:28:35.584 Attached to 0000:00:06.0 00:28:35.584 Initialization complete. Launching workers. 00:28:35.584 submit (in ns) avg, min, max = 13234.0, 11429.5, 104990.5 00:28:35.584 complete (in ns) avg, min, max = 7994.8, 7662.9, 185028.6 00:28:35.584 00:28:35.584 Submit histogram 00:28:35.584 ================ 00:28:35.584 Range in us Cumulative Count 00:28:35.584 11.398 - 11.459: 0.0137% ( 1) 00:28:35.584 11.459 - 11.520: 0.0274% ( 1) 00:28:35.584 11.581 - 11.642: 0.0411% ( 1) 00:28:35.584 11.703 - 11.764: 0.0685% ( 2) 00:28:35.584 11.764 - 11.825: 0.0822% ( 1) 00:28:35.584 12.495 - 12.556: 0.1096% ( 2) 00:28:35.584 12.556 - 12.617: 0.1919% ( 6) 00:28:35.584 12.617 - 12.678: 0.3015% ( 8) 00:28:35.584 12.678 - 12.739: 0.8224% ( 38) 00:28:35.584 12.739 - 12.800: 2.8509% ( 148) 00:28:35.584 12.800 - 12.861: 7.4561% ( 336) 00:28:35.584 12.861 - 12.922: 15.8032% ( 609) 00:28:35.584 12.922 - 12.983: 26.5762% ( 786) 00:28:35.584 12.983 - 13.044: 38.8158% ( 893) 00:28:35.584 13.044 - 13.105: 50.7538% ( 871) 00:28:35.584 13.105 - 13.166: 61.4035% ( 777) 00:28:35.584 13.166 - 13.227: 70.4770% ( 662) 00:28:35.584 13.227 - 13.288: 77.3300% ( 500) 00:28:35.584 13.288 - 13.349: 82.6206% ( 386) 00:28:35.584 13.349 - 13.410: 86.8969% ( 312) 00:28:35.584 13.410 - 13.470: 90.5702% ( 268) 00:28:35.584 13.470 - 13.531: 92.8454% ( 166) 00:28:35.584 13.531 - 13.592: 94.2708% ( 104) 00:28:35.584 13.592 - 13.653: 95.1343% ( 63) 00:28:35.584 13.653 - 13.714: 95.6414% ( 37) 00:28:35.584 13.714 - 13.775: 96.1486% ( 37) 00:28:35.584 13.775 - 13.836: 96.6420% ( 36) 00:28:35.584 13.836 - 13.897: 96.9984% ( 26) 00:28:35.584 13.897 - 13.958: 97.2862% ( 21) 00:28:35.584 13.958 - 14.019: 97.5055% ( 16) 00:28:35.584 14.019 - 14.080: 97.7248% ( 16) 00:28:35.584 14.080 - 14.141: 97.9578% ( 17) 00:28:35.584 14.141 - 14.202: 98.0400% ( 6) 00:28:35.584 14.202 - 14.263: 98.1497% ( 8) 00:28:35.584 14.263 - 14.324: 98.2867% ( 10) 00:28:35.584 14.324 - 14.385: 98.3553% ( 5) 00:28:35.584 14.385 - 14.446: 98.3964% ( 3) 00:28:35.584 14.446 - 14.507: 98.4512% ( 4) 00:28:35.584 14.507 - 14.568: 98.5197% ( 5) 00:28:35.584 14.750 - 14.811: 98.5334% ( 1) 00:28:35.584 14.811 - 14.872: 98.5471% ( 1) 00:28:35.584 14.994 - 15.055: 98.5746% ( 2) 00:28:35.584 15.055 - 15.116: 98.5883% ( 1) 00:28:35.584 15.177 - 15.238: 98.6157% ( 2) 00:28:35.584 15.299 - 15.360: 98.6431% ( 2) 00:28:35.584 15.360 - 15.421: 98.6568% ( 1) 00:28:35.584 15.421 - 15.482: 98.6842% ( 2) 00:28:35.584 15.482 - 15.543: 98.6979% ( 1) 
00:28:35.584 15.543 - 15.604: 98.7116% ( 1) 00:28:35.584 15.604 - 15.726: 98.7390% ( 2) 00:28:35.584 15.726 - 15.848: 98.7527% ( 1) 00:28:35.584 15.848 - 15.970: 98.7802% ( 2) 00:28:35.584 15.970 - 16.091: 98.7939% ( 1) 00:28:35.584 16.091 - 16.213: 98.8487% ( 4) 00:28:35.584 16.213 - 16.335: 98.9172% ( 5) 00:28:35.584 16.579 - 16.701: 98.9446% ( 2) 00:28:35.584 16.823 - 16.945: 98.9857% ( 3) 00:28:35.584 16.945 - 17.067: 99.0269% ( 3) 00:28:35.584 17.067 - 17.189: 99.0817% ( 4) 00:28:35.584 17.189 - 17.310: 99.0954% ( 1) 00:28:35.584 17.310 - 17.432: 99.1091% ( 1) 00:28:35.584 17.432 - 17.554: 99.1228% ( 1) 00:28:35.584 17.554 - 17.676: 99.1776% ( 4) 00:28:35.584 17.676 - 17.798: 99.1913% ( 1) 00:28:35.584 17.798 - 17.920: 99.2188% ( 2) 00:28:35.584 17.920 - 18.042: 99.3010% ( 6) 00:28:35.584 18.042 - 18.164: 99.3284% ( 2) 00:28:35.584 18.286 - 18.408: 99.3969% ( 5) 00:28:35.584 18.408 - 18.530: 99.4380% ( 3) 00:28:35.584 18.530 - 18.651: 99.4792% ( 3) 00:28:35.584 18.651 - 18.773: 99.5203% ( 3) 00:28:35.584 18.773 - 18.895: 99.5888% ( 5) 00:28:35.584 18.895 - 19.017: 99.6299% ( 3) 00:28:35.584 19.017 - 19.139: 99.6573% ( 2) 00:28:35.584 19.139 - 19.261: 99.6711% ( 1) 00:28:35.584 19.261 - 19.383: 99.6848% ( 1) 00:28:35.584 19.627 - 19.749: 99.6985% ( 1) 00:28:35.584 19.870 - 19.992: 99.7259% ( 2) 00:28:35.584 20.114 - 20.236: 99.7670% ( 3) 00:28:35.584 20.724 - 20.846: 99.7807% ( 1) 00:28:35.584 21.455 - 21.577: 99.7944% ( 1) 00:28:35.584 21.821 - 21.943: 99.8081% ( 1) 00:28:35.584 22.065 - 22.187: 99.8218% ( 1) 00:28:35.584 24.747 - 24.869: 99.8355% ( 1) 00:28:35.584 25.112 - 25.234: 99.8492% ( 1) 00:28:35.584 25.478 - 25.600: 99.8766% ( 2) 00:28:35.584 25.600 - 25.722: 99.9041% ( 2) 00:28:35.584 25.844 - 25.966: 99.9452% ( 3) 00:28:35.584 26.331 - 26.453: 99.9589% ( 1) 00:28:35.584 35.596 - 35.840: 99.9726% ( 1) 00:28:35.584 57.539 - 57.783: 99.9863% ( 1) 00:28:35.584 104.838 - 105.326: 100.0000% ( 1) 00:28:35.585 00:28:35.585 Complete histogram 00:28:35.585 ================== 00:28:35.585 Range in us Cumulative Count 00:28:35.585 7.650 - 7.680: 0.0137% ( 1) 00:28:35.585 7.680 - 7.710: 0.0685% ( 4) 00:28:35.585 7.710 - 7.741: 0.3152% ( 18) 00:28:35.585 7.741 - 7.771: 1.5488% ( 90) 00:28:35.585 7.771 - 7.802: 5.7292% ( 305) 00:28:35.585 7.802 - 7.863: 39.1173% ( 2436) 00:28:35.585 7.863 - 7.924: 73.6979% ( 2523) 00:28:35.585 7.924 - 7.985: 85.7182% ( 877) 00:28:35.585 7.985 - 8.046: 91.3240% ( 409) 00:28:35.585 8.046 - 8.107: 94.8465% ( 257) 00:28:35.585 8.107 - 8.168: 96.5461% ( 124) 00:28:35.585 8.168 - 8.229: 97.3410% ( 58) 00:28:35.585 8.229 - 8.290: 97.5740% ( 17) 00:28:35.585 8.290 - 8.350: 97.8481% ( 20) 00:28:35.585 8.350 - 8.411: 97.9578% ( 8) 00:28:35.585 8.411 - 8.472: 98.0400% ( 6) 00:28:35.585 8.472 - 8.533: 98.1771% ( 10) 00:28:35.585 8.533 - 8.594: 98.4375% ( 19) 00:28:35.585 8.594 - 8.655: 98.6294% ( 14) 00:28:35.585 8.655 - 8.716: 98.7116% ( 6) 00:28:35.585 8.716 - 8.777: 98.9172% ( 15) 00:28:35.585 8.777 - 8.838: 98.9857% ( 5) 00:28:35.585 8.838 - 8.899: 99.0269% ( 3) 00:28:35.585 8.960 - 9.021: 99.0543% ( 2) 00:28:35.585 9.021 - 9.082: 99.0817% ( 2) 00:28:35.585 9.082 - 9.143: 99.0954% ( 1) 00:28:35.585 9.143 - 9.204: 99.1091% ( 1) 00:28:35.585 9.204 - 9.265: 99.1228% ( 1) 00:28:35.585 9.265 - 9.326: 99.1365% ( 1) 00:28:35.585 9.326 - 9.387: 99.1639% ( 2) 00:28:35.585 9.387 - 9.448: 99.2325% ( 5) 00:28:35.585 9.448 - 9.509: 99.2599% ( 2) 00:28:35.585 9.570 - 9.630: 99.2736% ( 1) 00:28:35.585 10.057 - 10.118: 99.2873% ( 1) 00:28:35.585 10.545 - 10.606: 99.3010% ( 1) 
00:28:35.585 10.728 - 10.789: 99.3147% ( 1) 00:28:35.585 10.789 - 10.850: 99.3421% ( 2) 00:28:35.585 11.032 - 11.093: 99.3558% ( 1) 00:28:35.585 11.154 - 11.215: 99.3695% ( 1) 00:28:35.585 11.215 - 11.276: 99.3969% ( 2) 00:28:35.585 11.337 - 11.398: 99.4106% ( 1) 00:28:35.585 11.703 - 11.764: 99.4243% ( 1) 00:28:35.585 12.251 - 12.312: 99.4518% ( 2) 00:28:35.585 12.556 - 12.617: 99.4792% ( 2) 00:28:35.585 12.678 - 12.739: 99.4929% ( 1) 00:28:35.585 12.739 - 12.800: 99.5066% ( 1) 00:28:35.585 12.922 - 12.983: 99.5203% ( 1) 00:28:35.585 12.983 - 13.044: 99.5340% ( 1) 00:28:35.585 13.044 - 13.105: 99.5614% ( 2) 00:28:35.585 13.105 - 13.166: 99.5751% ( 1) 00:28:35.585 13.166 - 13.227: 99.6162% ( 3) 00:28:35.585 13.288 - 13.349: 99.6573% ( 3) 00:28:35.585 13.349 - 13.410: 99.6711% ( 1) 00:28:35.585 13.531 - 13.592: 99.6848% ( 1) 00:28:35.585 13.714 - 13.775: 99.6985% ( 1) 00:28:35.585 13.958 - 14.019: 99.7122% ( 1) 00:28:35.585 14.019 - 14.080: 99.7533% ( 3) 00:28:35.585 14.324 - 14.385: 99.7670% ( 1) 00:28:35.585 14.385 - 14.446: 99.7807% ( 1) 00:28:35.585 14.446 - 14.507: 99.8081% ( 2) 00:28:35.585 14.690 - 14.750: 99.8218% ( 1) 00:28:35.585 15.421 - 15.482: 99.8355% ( 1) 00:28:35.585 16.091 - 16.213: 99.8629% ( 2) 00:28:35.585 16.701 - 16.823: 99.8766% ( 1) 00:28:35.585 20.114 - 20.236: 99.8904% ( 1) 00:28:35.585 20.358 - 20.480: 99.9041% ( 1) 00:28:35.585 20.602 - 20.724: 99.9178% ( 1) 00:28:35.585 26.941 - 27.063: 99.9452% ( 2) 00:28:35.585 33.646 - 33.890: 99.9589% ( 1) 00:28:35.585 51.688 - 51.931: 99.9726% ( 1) 00:28:35.585 102.400 - 102.888: 99.9863% ( 1) 00:28:35.585 184.320 - 185.295: 100.0000% ( 1) 00:28:35.585 00:28:35.585 00:28:35.585 real 0m1.310s 00:28:35.585 user 0m1.114s 00:28:35.585 sys 0m0.117s 00:28:35.585 16:46:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.585 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:28:35.585 ************************************ 00:28:35.585 END TEST nvme_overhead 00:28:35.585 ************************************ 00:28:35.585 16:46:07 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:28:35.585 16:46:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:28:35.585 16:46:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:35.585 16:46:07 -- common/autotest_common.sh@10 -- # set +x 00:28:35.585 ************************************ 00:28:35.585 START TEST nvme_arbitration 00:28:35.585 ************************************ 00:28:35.585 16:46:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:28:39.783 Initializing NVMe Controllers 00:28:39.783 Attached to 0000:00:06.0 00:28:39.783 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:28:39.783 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:28:39.783 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:28:39.783 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:28:39.783 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:28:39.783 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:28:39.783 Initialization complete. Launching workers. 
00:28:39.783 Starting thread on core 1 with urgent priority queue 00:28:39.783 Starting thread on core 2 with urgent priority queue 00:28:39.783 Starting thread on core 0 with urgent priority queue 00:28:39.783 Starting thread on core 3 with urgent priority queue 00:28:39.783 QEMU NVMe Ctrl (12340 ) core 0: 7130.33 IO/s 14.02 secs/100000 ios 00:28:39.783 QEMU NVMe Ctrl (12340 ) core 1: 7179.00 IO/s 13.93 secs/100000 ios 00:28:39.783 QEMU NVMe Ctrl (12340 ) core 2: 3967.67 IO/s 25.20 secs/100000 ios 00:28:39.783 QEMU NVMe Ctrl (12340 ) core 3: 3844.33 IO/s 26.01 secs/100000 ios 00:28:39.783 ======================================================== 00:28:39.783 00:28:39.783 00:28:39.783 real 0m3.380s 00:28:39.783 user 0m9.194s 00:28:39.783 sys 0m0.144s 00:28:39.783 16:46:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.783 16:46:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.783 ************************************ 00:28:39.783 END TEST nvme_arbitration 00:28:39.783 ************************************ 00:28:39.783 16:46:10 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:28:39.783 16:46:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:28:39.784 16:46:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:39.784 16:46:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.784 ************************************ 00:28:39.784 START TEST nvme_single_aen 00:28:39.784 ************************************ 00:28:39.784 16:46:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:28:39.784 [2024-07-13 16:46:10.511723] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:39.784 [2024-07-13 16:46:10.511827] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.784 [2024-07-13 16:46:10.680198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:39.784 Asynchronous Event Request test 00:28:39.784 Attached to 0000:00:06.0 00:28:39.784 Reset controller to setup AER completions for this process 00:28:39.784 Registering asynchronous event callbacks... 00:28:39.784 Getting orig temperature thresholds of all controllers 00:28:39.784 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:39.784 Setting all controllers temperature threshold low to trigger AER 00:28:39.784 Waiting for all controllers temperature threshold to be set lower 00:28:39.784 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:39.784 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:28:39.784 Waiting for all controllers to trigger AER and reset threshold 00:28:39.784 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:39.784 Cleaning up... 
00:28:39.784 00:28:39.784 real 0m0.250s 00:28:39.784 user 0m0.099s 00:28:39.784 sys 0m0.082s 00:28:39.784 16:46:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.784 16:46:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.784 ************************************ 00:28:39.784 END TEST nvme_single_aen 00:28:39.784 ************************************ 00:28:39.784 16:46:10 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:28:39.784 16:46:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:39.784 16:46:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:39.784 16:46:10 -- common/autotest_common.sh@10 -- # set +x 00:28:39.784 ************************************ 00:28:39.784 START TEST nvme_doorbell_aers 00:28:39.784 ************************************ 00:28:39.784 16:46:10 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:28:39.784 16:46:10 -- nvme/nvme.sh@70 -- # bdfs=() 00:28:39.784 16:46:10 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:28:39.784 16:46:10 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:28:39.784 16:46:10 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:28:39.784 16:46:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:39.784 16:46:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:39.784 16:46:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:39.784 16:46:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:39.784 16:46:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:39.784 16:46:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:39.784 16:46:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:28:39.784 16:46:10 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:28:39.784 16:46:10 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:28:39.784 [2024-07-13 16:46:11.094432] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148812) is not found. Dropping the request. 00:28:49.752 Executing: test_write_invalid_db 00:28:49.753 Waiting for AER completion... 00:28:49.753 Failure: test_write_invalid_db 00:28:49.753 00:28:49.753 Executing: test_invalid_db_write_overflow_sq 00:28:49.753 Waiting for AER completion... 00:28:49.753 Failure: test_invalid_db_write_overflow_sq 00:28:49.753 00:28:49.753 Executing: test_invalid_db_write_overflow_cq 00:28:49.753 Waiting for AER completion... 
00:28:49.753 Failure: test_invalid_db_write_overflow_cq 00:28:49.753 00:28:49.753 00:28:49.753 real 0m10.122s 00:28:49.753 user 0m7.666s 00:28:49.753 sys 0m2.389s 00:28:49.753 16:46:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.753 16:46:20 -- common/autotest_common.sh@10 -- # set +x 00:28:49.753 ************************************ 00:28:49.753 END TEST nvme_doorbell_aers 00:28:49.753 ************************************ 00:28:49.753 16:46:20 -- nvme/nvme.sh@97 -- # uname 00:28:49.753 16:46:20 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:28:49.753 16:46:20 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:28:49.753 16:46:20 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:28:49.753 16:46:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:49.753 16:46:20 -- common/autotest_common.sh@10 -- # set +x 00:28:49.753 ************************************ 00:28:49.753 START TEST nvme_multi_aen 00:28:49.753 ************************************ 00:28:49.753 16:46:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:28:49.753 [2024-07-13 16:46:21.011895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:49.753 [2024-07-13 16:46:21.012233] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.753 [2024-07-13 16:46:21.218097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:49.753 [2024-07-13 16:46:21.218167] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148812) is not found. Dropping the request. 00:28:49.753 [2024-07-13 16:46:21.218662] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148812) is not found. Dropping the request. 00:28:49.753 [2024-07-13 16:46:21.218797] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148812) is not found. Dropping the request. 00:28:50.010 Child process pid: 148999 00:28:50.010 [2024-07-13 16:46:21.223090] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:50.010 [2024-07-13 16:46:21.223266] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.268 [Child] Asynchronous Event Request test 00:28:50.268 [Child] Attached to 0000:00:06.0 00:28:50.268 [Child] Registering asynchronous event callbacks... 00:28:50.268 [Child] Getting orig temperature thresholds of all controllers 00:28:50.268 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:50.268 [Child] Waiting for all controllers to trigger AER and reset threshold 00:28:50.268 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:50.268 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:50.268 [Child] Cleaning up... 00:28:50.268 Asynchronous Event Request test 00:28:50.268 Attached to 0000:00:06.0 00:28:50.268 Reset controller to setup AER completions for this process 00:28:50.268 Registering asynchronous event callbacks... 
00:28:50.268 Getting orig temperature thresholds of all controllers 00:28:50.268 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:50.268 Setting all controllers temperature threshold low to trigger AER 00:28:50.268 Waiting for all controllers temperature threshold to be set lower 00:28:50.268 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:50.268 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:28:50.268 Waiting for all controllers to trigger AER and reset threshold 00:28:50.268 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:50.268 Cleaning up... 00:28:50.268 00:28:50.268 real 0m0.619s 00:28:50.268 user 0m0.191s 00:28:50.268 sys 0m0.252s 00:28:50.268 16:46:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.268 16:46:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.268 ************************************ 00:28:50.268 END TEST nvme_multi_aen 00:28:50.268 ************************************ 00:28:50.268 16:46:21 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:28:50.268 16:46:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:28:50.268 16:46:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.268 16:46:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.268 ************************************ 00:28:50.268 START TEST nvme_startup 00:28:50.268 ************************************ 00:28:50.268 16:46:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:28:50.526 Initializing NVMe Controllers 00:28:50.526 Attached to 0000:00:06.0 00:28:50.526 Initialization complete. 00:28:50.526 Time used:182396.375 (us). 00:28:50.526 00:28:50.526 real 0m0.273s 00:28:50.526 user 0m0.074s 00:28:50.526 sys 0m0.144s 00:28:50.526 16:46:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.526 16:46:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.526 ************************************ 00:28:50.526 END TEST nvme_startup 00:28:50.526 ************************************ 00:28:50.526 16:46:21 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:28:50.526 16:46:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:50.526 16:46:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.526 16:46:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.786 ************************************ 00:28:50.786 START TEST nvme_multi_secondary 00:28:50.786 ************************************ 00:28:50.786 16:46:22 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:28:50.786 16:46:22 -- nvme/nvme.sh@52 -- # pid0=149064 00:28:50.786 16:46:22 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:28:50.786 16:46:22 -- nvme/nvme.sh@54 -- # pid1=149065 00:28:50.786 16:46:22 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:28:50.786 16:46:22 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:28:54.181 Initializing NVMe Controllers 00:28:54.181 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:54.181 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:28:54.181 Initialization complete. Launching workers. 
00:28:54.181 ======================================================== 00:28:54.181 Latency(us) 00:28:54.181 Device Information : IOPS MiB/s Average min max 00:28:54.181 PCIE (0000:00:06.0) NSID 1 from core 1: 33664.00 131.50 475.01 171.87 1892.48 00:28:54.181 ======================================================== 00:28:54.181 Total : 33664.00 131.50 475.01 171.87 1892.48 00:28:54.181 00:28:54.439 Initializing NVMe Controllers 00:28:54.439 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:54.439 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:28:54.439 Initialization complete. Launching workers. 00:28:54.439 ======================================================== 00:28:54.439 Latency(us) 00:28:54.439 Device Information : IOPS MiB/s Average min max 00:28:54.439 PCIE (0000:00:06.0) NSID 1 from core 2: 14576.00 56.94 1097.49 172.41 20663.05 00:28:54.439 ======================================================== 00:28:54.439 Total : 14576.00 56.94 1097.49 172.41 20663.05 00:28:54.439 00:28:54.439 16:46:25 -- nvme/nvme.sh@56 -- # wait 149064 00:28:56.343 Initializing NVMe Controllers 00:28:56.343 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:56.343 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:56.343 Initialization complete. Launching workers. 00:28:56.343 ======================================================== 00:28:56.343 Latency(us) 00:28:56.343 Device Information : IOPS MiB/s Average min max 00:28:56.343 PCIE (0000:00:06.0) NSID 1 from core 0: 39623.99 154.78 403.50 155.03 1669.56 00:28:56.343 ======================================================== 00:28:56.343 Total : 39623.99 154.78 403.50 155.03 1669.56 00:28:56.343 00:28:56.343 16:46:27 -- nvme/nvme.sh@57 -- # wait 149065 00:28:56.343 16:46:27 -- nvme/nvme.sh@61 -- # pid0=149138 00:28:56.343 16:46:27 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:28:56.343 16:46:27 -- nvme/nvme.sh@63 -- # pid1=149139 00:28:56.343 16:46:27 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:28:56.343 16:46:27 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:28:59.623 Initializing NVMe Controllers 00:28:59.623 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:59.623 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:59.623 Initialization complete. Launching workers. 00:28:59.623 ======================================================== 00:28:59.623 Latency(us) 00:28:59.623 Device Information : IOPS MiB/s Average min max 00:28:59.623 PCIE (0000:00:06.0) NSID 1 from core 0: 33418.66 130.54 478.50 168.39 1389.53 00:28:59.623 ======================================================== 00:28:59.623 Total : 33418.66 130.54 478.50 168.39 1389.53 00:28:59.623 00:28:59.623 Initializing NVMe Controllers 00:28:59.623 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:59.623 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:28:59.623 Initialization complete. Launching workers. 
00:28:59.623 ======================================================== 00:28:59.623 Latency(us) 00:28:59.623 Device Information : IOPS MiB/s Average min max 00:28:59.623 PCIE (0000:00:06.0) NSID 1 from core 1: 35093.33 137.08 455.62 165.22 1376.79 00:28:59.623 ======================================================== 00:28:59.623 Total : 35093.33 137.08 455.62 165.22 1376.79 00:28:59.623 00:29:01.531 Initializing NVMe Controllers 00:29:01.531 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:01.531 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:29:01.531 Initialization complete. Launching workers. 00:29:01.531 ======================================================== 00:29:01.531 Latency(us) 00:29:01.531 Device Information : IOPS MiB/s Average min max 00:29:01.531 PCIE (0000:00:06.0) NSID 1 from core 2: 16111.46 62.94 992.48 144.31 24512.04 00:29:01.531 ======================================================== 00:29:01.531 Total : 16111.46 62.94 992.48 144.31 24512.04 00:29:01.531 00:29:01.531 16:46:32 -- nvme/nvme.sh@65 -- # wait 149138 00:29:01.531 16:46:32 -- nvme/nvme.sh@66 -- # wait 149139 00:29:01.531 00:29:01.531 real 0m10.676s 00:29:01.531 user 0m18.593s 00:29:01.531 sys 0m0.909s 00:29:01.531 16:46:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.531 ************************************ 00:29:01.531 END TEST nvme_multi_secondary 00:29:01.531 ************************************ 00:29:01.531 16:46:32 -- common/autotest_common.sh@10 -- # set +x 00:29:01.531 16:46:32 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:29:01.531 16:46:32 -- nvme/nvme.sh@102 -- # kill_stub 00:29:01.531 16:46:32 -- common/autotest_common.sh@1065 -- # [[ -e /proc/148372 ]] 00:29:01.531 16:46:32 -- common/autotest_common.sh@1066 -- # kill 148372 00:29:01.531 16:46:32 -- common/autotest_common.sh@1067 -- # wait 148372 00:29:02.466 [2024-07-13 16:46:33.630665] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148998) is not found. Dropping the request. 00:29:02.466 [2024-07-13 16:46:33.630856] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148998) is not found. Dropping the request. 00:29:02.466 [2024-07-13 16:46:33.630952] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148998) is not found. Dropping the request. 00:29:02.466 [2024-07-13 16:46:33.631033] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148998) is not found. Dropping the request. 00:29:02.466 16:46:33 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:29:02.466 16:46:33 -- common/autotest_common.sh@1073 -- # echo 2 00:29:02.466 16:46:33 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:02.466 16:46:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:02.466 16:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:02.466 16:46:33 -- common/autotest_common.sh@10 -- # set +x 00:29:02.466 ************************************ 00:29:02.466 START TEST bdev_nvme_reset_stuck_adm_cmd 00:29:02.466 ************************************ 00:29:02.466 16:46:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:02.466 * Looking for test storage... 
00:29:02.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:02.466 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:29:02.466 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:29:02.466 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:29:02.466 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:29:02.466 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:29:02.466 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:29:02.466 16:46:33 -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:02.466 16:46:33 -- common/autotest_common.sh@1509 -- # local bdfs 00:29:02.466 16:46:33 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:29:02.466 16:46:33 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:29:02.466 16:46:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:02.466 16:46:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:02.466 16:46:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:02.466 16:46:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:02.466 16:46:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:02.725 16:46:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:02.725 16:46:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:02.725 16:46:33 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:29:02.725 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:29:02.725 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:29:02.725 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=149304 00:29:02.725 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:02.725 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 149304 00:29:02.725 16:46:33 -- common/autotest_common.sh@819 -- # '[' -z 149304 ']' 00:29:02.725 16:46:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.725 16:46:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:02.725 16:46:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.725 16:46:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:02.725 16:46:33 -- common/autotest_common.sh@10 -- # set +x 00:29:02.725 16:46:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:29:02.725 [2024-07-13 16:46:34.050217] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:29:02.725 [2024-07-13 16:46:34.050709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149304 ] 00:29:02.984 [2024-07-13 16:46:34.254851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.984 [2024-07-13 16:46:34.367299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:02.984 [2024-07-13 16:46:34.368306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.984 [2024-07-13 16:46:34.368457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.984 [2024-07-13 16:46:34.368576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.984 [2024-07-13 16:46:34.368589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.552 16:46:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:03.552 16:46:34 -- common/autotest_common.sh@852 -- # return 0 00:29:03.552 16:46:34 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:29:03.552 16:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.552 16:46:34 -- common/autotest_common.sh@10 -- # set +x 00:29:03.811 nvme0n1 00:29:03.811 16:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.811 16:46:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:29:03.811 16:46:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_mFLxH.txt 00:29:03.811 16:46:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:29:03.811 16:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.811 16:46:35 -- common/autotest_common.sh@10 -- # set +x 00:29:03.811 true 00:29:03.811 16:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.811 16:46:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:29:03.811 16:46:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720889195 00:29:03.811 16:46:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=149329 00:29:03.811 16:46:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:03.811 16:46:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:29:03.811 16:46:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:05.717 16:46:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:05.717 16:46:37 -- common/autotest_common.sh@10 -- # set +x 00:29:05.717 [2024-07-13 16:46:37.068158] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:05.717 [2024-07-13 16:46:37.069035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:05.717 [2024-07-13 16:46:37.069254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:29:05.717 [2024-07-13 16:46:37.069434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:05.717 [2024-07-13 16:46:37.071673] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:05.717 16:46:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:05.717 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 149329 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 149329 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 149329 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.717 16:46:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:05.717 16:46:37 -- common/autotest_common.sh@10 -- # set +x 00:29:05.717 16:46:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_mFLxH.txt 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:05.717 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:05.718 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:05.718 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:05.718 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:05.718 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:29:05.718 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:29:05.718 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_mFLxH.txt 00:29:05.718 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 149304 00:29:05.718 16:46:37 -- common/autotest_common.sh@926 -- # '[' -z 149304 ']' 00:29:05.718 16:46:37 -- common/autotest_common.sh@930 -- # kill -0 149304 00:29:05.718 16:46:37 -- common/autotest_common.sh@931 -- # uname 00:29:05.977 
16:46:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:05.977 16:46:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149304 00:29:05.977 16:46:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:05.977 16:46:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:05.977 killing process with pid 149304 00:29:05.977 16:46:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149304' 00:29:05.977 16:46:37 -- common/autotest_common.sh@945 -- # kill 149304 00:29:05.977 16:46:37 -- common/autotest_common.sh@950 -- # wait 149304 00:29:06.545 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:29:06.545 16:46:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:29:06.545 00:29:06.545 real 0m4.099s 00:29:06.545 user 0m14.009s 00:29:06.545 sys 0m0.781s 00:29:06.545 16:46:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.545 16:46:37 -- common/autotest_common.sh@10 -- # set +x 00:29:06.545 ************************************ 00:29:06.545 END TEST bdev_nvme_reset_stuck_adm_cmd 00:29:06.546 ************************************ 00:29:06.546 16:46:37 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:29:06.546 16:46:37 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:29:06.546 16:46:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:06.546 16:46:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.546 16:46:37 -- common/autotest_common.sh@10 -- # set +x 00:29:06.546 ************************************ 00:29:06.546 START TEST nvme_fio 00:29:06.546 ************************************ 00:29:06.546 16:46:37 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:29:06.546 16:46:37 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:29:06.546 16:46:37 -- nvme/nvme.sh@32 -- # ran_fio=false 00:29:06.546 16:46:37 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:29:06.546 16:46:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:06.546 16:46:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:06.546 16:46:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:06.546 16:46:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:06.546 16:46:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:06.806 16:46:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:06.806 16:46:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:06.806 16:46:38 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:29:06.806 16:46:38 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:29:06.806 16:46:38 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:29:06.806 16:46:38 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:06.806 16:46:38 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:29:07.063 16:46:38 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:07.064 16:46:38 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:29:07.064 16:46:38 -- nvme/nvme.sh@41 -- # bs=4096 00:29:07.064 16:46:38 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:07.064 
16:46:38 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:07.064 16:46:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:07.064 16:46:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:07.064 16:46:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:07.064 16:46:38 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:07.064 16:46:38 -- common/autotest_common.sh@1320 -- # shift 00:29:07.064 16:46:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:07.064 16:46:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:07.322 16:46:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:07.322 16:46:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:07.322 16:46:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:07.322 16:46:38 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:29:07.322 16:46:38 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:29:07.322 16:46:38 -- common/autotest_common.sh@1326 -- # break 00:29:07.322 16:46:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:07.322 16:46:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:07.322 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:07.322 fio-3.35 00:29:07.322 Starting 1 thread 00:29:10.604 00:29:10.604 test: (groupid=0, jobs=1): err= 0: pid=149467: Sat Jul 13 16:46:41 2024 00:29:10.604 read: IOPS=19.8k, BW=77.4MiB/s (81.1MB/s)(155MiB/2001msec) 00:29:10.604 slat (usec): min=3, max=207, avg= 4.85, stdev= 2.71 00:29:10.604 clat (usec): min=248, max=7559, avg=3214.59, stdev=250.63 00:29:10.604 lat (usec): min=263, max=7713, avg=3219.44, stdev=250.86 00:29:10.604 clat percentiles (usec): 00:29:10.604 | 1.00th=[ 2737], 5.00th=[ 2933], 10.00th=[ 2999], 20.00th=[ 3064], 00:29:10.604 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3261], 00:29:10.604 | 70.00th=[ 3294], 80.00th=[ 3359], 90.00th=[ 3425], 95.00th=[ 3490], 00:29:10.604 | 99.00th=[ 3720], 99.50th=[ 4146], 99.90th=[ 5932], 99.95th=[ 6652], 00:29:10.604 | 99.99th=[ 7439] 00:29:10.604 bw ( KiB/s): min=78320, max=80696, per=100.00%, avg=79258.67, stdev=1264.06, samples=3 00:29:10.604 iops : min=19580, max=20174, avg=19814.67, stdev=316.01, samples=3 00:29:10.604 write: IOPS=19.8k, BW=77.2MiB/s (80.9MB/s)(154MiB/2001msec); 0 zone resets 00:29:10.604 slat (usec): min=3, max=164, avg= 5.17, stdev= 2.43 00:29:10.604 clat (usec): min=267, max=7460, avg=3235.74, stdev=250.17 00:29:10.604 lat (usec): min=272, max=7502, avg=3240.91, stdev=250.32 00:29:10.604 clat percentiles (usec): 00:29:10.604 | 1.00th=[ 2769], 5.00th=[ 2966], 10.00th=[ 2999], 20.00th=[ 3097], 00:29:10.604 | 30.00th=[ 3130], 40.00th=[ 3195], 50.00th=[ 3228], 60.00th=[ 3261], 00:29:10.604 | 70.00th=[ 3326], 80.00th=[ 3359], 90.00th=[ 3458], 95.00th=[ 3523], 00:29:10.604 | 99.00th=[ 3752], 99.50th=[ 4359], 99.90th=[ 5997], 99.95th=[ 6718], 00:29:10.604 | 
99.99th=[ 7308] 00:29:10.604 bw ( KiB/s): min=78448, max=80528, per=100.00%, avg=79285.33, stdev=1097.64, samples=3 00:29:10.604 iops : min=19612, max=20132, avg=19821.33, stdev=274.41, samples=3 00:29:10.604 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:29:10.604 lat (msec) : 2=0.08%, 4=99.29%, 10=0.58% 00:29:10.604 cpu : usr=99.85%, sys=0.00%, ctx=22, majf=0, minf=39 00:29:10.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:10.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:10.604 issued rwts: total=39629,39532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:10.604 00:29:10.604 Run status group 0 (all jobs): 00:29:10.604 READ: bw=77.4MiB/s (81.1MB/s), 77.4MiB/s-77.4MiB/s (81.1MB/s-81.1MB/s), io=155MiB (162MB), run=2001-2001msec 00:29:10.604 WRITE: bw=77.2MiB/s (80.9MB/s), 77.2MiB/s-77.2MiB/s (80.9MB/s-80.9MB/s), io=154MiB (162MB), run=2001-2001msec 00:29:10.863 ----------------------------------------------------- 00:29:10.863 Suppressions used: 00:29:10.863 count bytes template 00:29:10.863 1 32 /usr/src/fio/parse.c 00:29:10.863 ----------------------------------------------------- 00:29:10.863 00:29:10.863 16:46:42 -- nvme/nvme.sh@44 -- # ran_fio=true 00:29:10.863 16:46:42 -- nvme/nvme.sh@46 -- # true 00:29:10.863 00:29:10.863 real 0m4.313s 00:29:10.863 user 0m3.490s 00:29:10.863 sys 0m0.508s 00:29:10.863 16:46:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:10.863 16:46:42 -- common/autotest_common.sh@10 -- # set +x 00:29:10.863 ************************************ 00:29:10.863 END TEST nvme_fio 00:29:10.863 ************************************ 00:29:11.122 ************************************ 00:29:11.122 END TEST nvme 00:29:11.122 ************************************ 00:29:11.122 00:29:11.122 real 0m46.123s 00:29:11.122 user 1m56.543s 00:29:11.122 sys 0m10.606s 00:29:11.122 16:46:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.122 16:46:42 -- common/autotest_common.sh@10 -- # set +x 00:29:11.122 16:46:42 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:29:11.122 16:46:42 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:11.122 16:46:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:11.122 16:46:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:11.122 16:46:42 -- common/autotest_common.sh@10 -- # set +x 00:29:11.122 ************************************ 00:29:11.122 START TEST nvme_scc 00:29:11.122 ************************************ 00:29:11.122 16:46:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:11.122 * Looking for test storage... 
00:29:11.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:11.122 16:46:42 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:11.122 16:46:42 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:11.122 16:46:42 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:29:11.122 16:46:42 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:11.122 16:46:42 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:11.122 16:46:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.122 16:46:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.122 16:46:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.122 16:46:42 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:11.122 16:46:42 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:11.122 16:46:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:11.122 16:46:42 -- paths/export.sh@5 -- # export PATH 00:29:11.122 16:46:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:11.122 16:46:42 -- nvme/functions.sh@10 -- # ctrls=() 00:29:11.122 16:46:42 -- nvme/functions.sh@10 -- # declare -A ctrls 00:29:11.122 16:46:42 -- nvme/functions.sh@11 -- # nvmes=() 00:29:11.122 16:46:42 -- nvme/functions.sh@11 -- # declare -A nvmes 00:29:11.122 16:46:42 -- nvme/functions.sh@12 -- # bdfs=() 00:29:11.122 16:46:42 -- nvme/functions.sh@12 -- # declare -A bdfs 00:29:11.122 16:46:42 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:29:11.123 16:46:42 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:29:11.123 16:46:42 -- nvme/functions.sh@14 -- # nvme_name= 00:29:11.123 16:46:42 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:11.123 16:46:42 -- nvme/nvme_scc.sh@12 -- # uname 00:29:11.123 16:46:42 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:29:11.123 16:46:42 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
00:29:11.123 16:46:42 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:11.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:11.690 Waiting for block devices as requested 00:29:11.690 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:11.952 16:46:43 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:29:11.952 16:46:43 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:29:11.952 16:46:43 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:11.952 16:46:43 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:29:11.952 16:46:43 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:29:11.952 16:46:43 -- scripts/common.sh@15 -- # local i 00:29:11.952 16:46:43 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:29:11.952 16:46:43 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:11.952 16:46:43 -- scripts/common.sh@24 -- # return 0 00:29:11.952 16:46:43 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:29:11.952 16:46:43 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:29:11.952 16:46:43 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@18 -- # shift 00:29:11.952 16:46:43 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val 00:29:11.952 16:46:43 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:29:11.952 16:46:43 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # IFS=: 
00:29:11.952 16:46:43 -- nvme/functions.sh@21 -- # read -r reg val
00:29:11.952 16:46:43 -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl fields, per-register IFS=:/read/eval trace condensed to the resulting assignments:
00:29:11.952 16:46:43 -- #   nvme0: cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0
00:29:11.953 16:46:43 -- #   nvme0: tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:29:11.954 16:46:43 -- #   nvme0: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:29:11.954 16:46:43 -- #   nvme0: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:29:11.954 16:46:43 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:29:11.954 16:46:43 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:29:11.954 16:46:43 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:29:11.954 16:46:43 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:29:11.954 16:46:43 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:29:11.954 16:46:43 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:29:11.954 16:46:43 -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns fields, per-register trace condensed to the resulting assignments:
00:29:11.955 16:46:43 -- #   nvme0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:29:11.955 16:46:43 -- #   nvme0n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:29:11.955 16:46:43 -- #   nvme0n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:29:11.956 16:46:43 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:29:11.956 16:46:43 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:29:11.956 16:46:43 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:29:11.956 16:46:43 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0
00:29:11.956 16:46:43 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:29:11.956 16:46:43 -- nvme/functions.sh@65 -- # (( 1 > 0 ))
00:29:11.956 16:46:43 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:29:11.956 16:46:43 -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:29:11.956 16:46:43 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:29:11.956 16:46:43 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:29:11.956 16:46:43 -- nvme/functions.sh@190 -- # (( 1 == 0 ))
00:29:11.956 16:46:43 -- nvme/functions.sh@192 -- # local ctrl feature=scc
00:29:11.956 16:46:43 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:29:11.956 16:46:43 -- nvme/functions.sh@194 -- # [[ function == function ]]
00:29:11.956 16:46:43 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:29:11.956 16:46:43 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0
00:29:11.956 16:46:43 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs
00:29:11.956 16:46:43 -- nvme/functions.sh@184 -- # get_oncs nvme0
00:29:11.956 16:46:43 -- nvme/functions.sh@169 -- # local ctrl=nvme0
00:29:11.956 16:46:43 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs
00:29:11.956 16:46:43 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:29:11.956 16:46:43 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:29:11.956 16:46:43 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:29:11.956 16:46:43 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:29:11.956 16:46:43 -- nvme/functions.sh@76 -- # echo 0x15d
00:29:11.956 16:46:43 -- nvme/functions.sh@184 -- # oncs=0x15d
00:29:11.956 16:46:43 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:29:11.956 16:46:43 -- nvme/functions.sh@197 -- # echo nvme0
00:29:11.956 16:46:43 -- nvme/functions.sh@205
-- # (( 1 > 0 )) 00:29:11.956 16:46:43 -- nvme/functions.sh@206 -- # echo nvme0 00:29:11.956 16:46:43 -- nvme/functions.sh@207 -- # return 0 00:29:11.956 16:46:43 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:29:11.956 16:46:43 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:29:11.956 16:46:43 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:12.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:12.524 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:14.429 16:46:45 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:14.429 16:46:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:29:14.429 16:46:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.429 16:46:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.429 ************************************ 00:29:14.429 START TEST nvme_simple_copy 00:29:14.429 ************************************ 00:29:14.429 16:46:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:14.688 Initializing NVMe Controllers 00:29:14.689 Attaching to 0000:00:06.0 00:29:14.689 Controller supports SCC. Attached to 0000:00:06.0 00:29:14.689 Namespace ID: 1 size: 5GB 00:29:14.689 Initialization complete. 00:29:14.689 00:29:14.689 Controller QEMU NVMe Ctrl (12340 ) 00:29:14.689 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:29:14.689 Namespace Block Size:4096 00:29:14.689 Writing LBAs 0 to 63 with Random Data 00:29:14.689 Copied LBAs from 0 - 63 to the Destination LBA 256 00:29:14.689 LBAs matching Written Data: 64 00:29:14.689 00:29:14.689 real 0m0.267s 00:29:14.689 user 0m0.075s 00:29:14.689 sys 0m0.094s 00:29:14.689 16:46:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.689 16:46:46 -- common/autotest_common.sh@10 -- # set +x 00:29:14.689 ************************************ 00:29:14.689 END TEST nvme_simple_copy 00:29:14.689 ************************************ 00:29:14.948 00:29:14.948 real 0m3.790s 00:29:14.948 user 0m0.785s 00:29:14.948 sys 0m2.907s 00:29:14.948 16:46:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.948 16:46:46 -- common/autotest_common.sh@10 -- # set +x 00:29:14.948 ************************************ 00:29:14.948 END TEST nvme_scc 00:29:14.948 ************************************ 00:29:14.948 16:46:46 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:29:14.948 16:46:46 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:29:14.948 16:46:46 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:29:14.948 16:46:46 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:29:14.948 16:46:46 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:29:14.948 16:46:46 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:14.948 16:46:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:14.948 16:46:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.948 16:46:46 -- common/autotest_common.sh@10 -- # set +x 00:29:14.948 ************************************ 00:29:14.948 START TEST nvme_rpc 00:29:14.948 ************************************ 00:29:14.948 16:46:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:14.948 * Looking for test storage... 
00:29:14.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:14.948 16:46:46 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:14.948 16:46:46 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:29:14.948 16:46:46 -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:14.948 16:46:46 -- common/autotest_common.sh@1509 -- # local bdfs 00:29:14.948 16:46:46 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:29:14.948 16:46:46 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:29:14.948 16:46:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:14.948 16:46:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:14.948 16:46:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:14.948 16:46:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:14.948 16:46:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:15.207 16:46:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:15.207 16:46:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:15.207 16:46:46 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:29:15.207 16:46:46 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:29:15.207 16:46:46 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=149957 00:29:15.207 16:46:46 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:29:15.207 16:46:46 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:15.207 16:46:46 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 149957 00:29:15.207 16:46:46 -- common/autotest_common.sh@819 -- # '[' -z 149957 ']' 00:29:15.207 16:46:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.207 16:46:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:15.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.207 16:46:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.207 16:46:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:15.207 16:46:46 -- common/autotest_common.sh@10 -- # set +x 00:29:15.207 [2024-07-13 16:46:46.543198] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
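The BDF discovery traced above reduces to: render the NVMe bdev config with gen_nvme.sh, pull each controller's PCI address out of it with jq, and keep the first. A rough equivalent of get_first_nvme_bdf, with error handling trimmed ($rootdir stands in for the spdk checkout):

    # Sketch of get_first_nvme_bdf as traced above; $rootdir is the spdk tree.
    get_first_nvme_bdf() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # mirrors the (( 1 == 0 )) guard
        printf '%s\n' "${bdfs[0]}"
    }

    bdf=$(get_first_nvme_bdf)               # -> 0000:00:06.0 on this runner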
00:29:15.207 [2024-07-13 16:46:46.543511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149957 ] 00:29:15.466 [2024-07-13 16:46:46.705862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:15.466 [2024-07-13 16:46:46.783230] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:15.466 [2024-07-13 16:46:46.783652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.466 [2024-07-13 16:46:46.783648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.034 16:46:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:16.034 16:46:47 -- common/autotest_common.sh@852 -- # return 0 00:29:16.034 16:46:47 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:29:16.293 Nvme0n1 00:29:16.293 16:46:47 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:29:16.293 16:46:47 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:29:16.552 request: 00:29:16.552 { 00:29:16.552 "filename": "non_existing_file", 00:29:16.552 "bdev_name": "Nvme0n1", 00:29:16.552 "method": "bdev_nvme_apply_firmware", 00:29:16.552 "req_id": 1 00:29:16.552 } 00:29:16.552 Got JSON-RPC error response 00:29:16.552 response: 00:29:16.552 { 00:29:16.552 "code": -32603, 00:29:16.552 "message": "open file failed." 00:29:16.552 } 00:29:16.552 16:46:47 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:29:16.552 16:46:47 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:29:16.552 16:46:47 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:16.812 16:46:48 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:16.812 16:46:48 -- nvme/nvme_rpc.sh@40 -- # killprocess 149957 00:29:16.812 16:46:48 -- common/autotest_common.sh@926 -- # '[' -z 149957 ']' 00:29:16.812 16:46:48 -- common/autotest_common.sh@930 -- # kill -0 149957 00:29:16.812 16:46:48 -- common/autotest_common.sh@931 -- # uname 00:29:16.812 16:46:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:16.812 16:46:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149957 00:29:16.812 16:46:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:16.812 16:46:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:16.812 killing process with pid 149957 00:29:16.812 16:46:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149957' 00:29:16.812 16:46:48 -- common/autotest_common.sh@945 -- # kill 149957 00:29:16.812 16:46:48 -- common/autotest_common.sh@950 -- # wait 149957 00:29:17.381 ************************************ 00:29:17.381 END TEST nvme_rpc 00:29:17.381 ************************************ 00:29:17.381 00:29:17.381 real 0m2.514s 00:29:17.381 user 0m4.430s 00:29:17.381 sys 0m0.844s 00:29:17.381 16:46:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.381 16:46:48 -- common/autotest_common.sh@10 -- # set +x 00:29:17.381 16:46:48 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:17.381 16:46:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:17.381 16:46:48 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:29:17.381 16:46:48 -- common/autotest_common.sh@10 -- # set +x 00:29:17.381 ************************************ 00:29:17.381 START TEST nvme_rpc_timeouts 00:29:17.381 ************************************ 00:29:17.381 16:46:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:17.641 * Looking for test storage... 00:29:17.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:17.641 16:46:48 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:17.641 16:46:48 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_150021 00:29:17.641 16:46:48 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_150021 00:29:17.641 16:46:48 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=150053 00:29:17.641 16:46:48 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:29:17.641 16:46:48 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 150053 00:29:17.641 16:46:48 -- common/autotest_common.sh@819 -- # '[' -z 150053 ']' 00:29:17.641 16:46:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.641 16:46:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:17.641 16:46:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.641 16:46:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:17.641 16:46:48 -- common/autotest_common.sh@10 -- # set +x 00:29:17.641 16:46:48 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:17.641 [2024-07-13 16:46:49.036051] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:17.641 [2024-07-13 16:46:49.036348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150053 ] 00:29:17.900 [2024-07-13 16:46:49.195252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:17.900 [2024-07-13 16:46:49.275738] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:17.900 [2024-07-13 16:46:49.276216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.900 [2024-07-13 16:46:49.276216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.838 16:46:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:18.838 Checking default timeout settings: 00:29:18.838 16:46:49 -- common/autotest_common.sh@852 -- # return 0 00:29:18.838 16:46:49 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:29:18.838 16:46:49 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:18.838 16:46:50 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:29:18.838 Making settings changes with rpc: 00:29:18.838 16:46:50 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:29:19.096 Check default vs. 
modified settings: 00:29:19.096 16:46:50 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:29:19.096 16:46:50 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_150021 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_150021 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:29:19.664 Setting action_on_timeout is changed as expected. 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_150021 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_150021 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:29:19.664 Setting timeout_us is changed as expected. 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_150021 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_150021 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:29:19.664 Setting timeout_admin_us is changed as expected. 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
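The grep/awk/sed triples above are a field-by-field diff of two save_config dumps: defaults go to /tmp/settings_default_150021, bdev_nvme_set_options changes the three timeout knobs over RPC, the config is dumped again to /tmp/settings_modified_150021, and each setting must differ between the files. Condensed into one helper (file names reused from this run):

    # One comparison cycle from nvme_rpc_timeouts.sh@39-47.
    check_setting() {
        local setting=$1 before after
        before=$(grep "$setting" /tmp/settings_default_150021 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_150021 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$after" ]; then
            echo "Setting $setting unchanged -- test failed" >&2
            return 1
        fi
        echo "Setting $setting is changed as expected."
    }

    for s in action_on_timeout timeout_us timeout_admin_us; do
        check_setting "$s" || exit 1
    done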
00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_150021 /tmp/settings_modified_150021 00:29:19.664 16:46:50 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 150053 00:29:19.664 16:46:50 -- common/autotest_common.sh@926 -- # '[' -z 150053 ']' 00:29:19.664 16:46:50 -- common/autotest_common.sh@930 -- # kill -0 150053 00:29:19.664 16:46:50 -- common/autotest_common.sh@931 -- # uname 00:29:19.664 16:46:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:19.664 16:46:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150053 00:29:19.664 16:46:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:19.664 16:46:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:19.664 killing process with pid 150053 00:29:19.664 16:46:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150053' 00:29:19.664 16:46:50 -- common/autotest_common.sh@945 -- # kill 150053 00:29:19.664 16:46:50 -- common/autotest_common.sh@950 -- # wait 150053 00:29:20.231 RPC TIMEOUT SETTING TEST PASSED. 00:29:20.231 16:46:51 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:29:20.231 ************************************ 00:29:20.231 END TEST nvme_rpc_timeouts 00:29:20.231 ************************************ 00:29:20.231 00:29:20.231 real 0m2.771s 00:29:20.231 user 0m5.261s 00:29:20.231 sys 0m0.837s 00:29:20.231 16:46:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.231 16:46:51 -- common/autotest_common.sh@10 -- # set +x 00:29:20.231 16:46:51 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:29:20.231 16:46:51 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:29:20.231 16:46:51 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:29:20.231 16:46:51 -- spdk/autotest.sh@268 -- # timing_exit lib 00:29:20.231 16:46:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:20.231 16:46:51 -- common/autotest_common.sh@10 -- # set +x 00:29:20.490 16:46:51 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:20.490 16:46:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:20.490 16:46:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:20.490 16:46:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:20.490 16:46:51 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:29:20.490 16:46:51 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:29:20.490 16:46:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:20.490 16:46:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 
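Every suite in this log, including the blockdev_raid5f run that starts next, goes through the run_test wrapper that produces the starred START/END banners and the real/user/sys timing lines. A stripped-down sketch of that wrapper (the real one in autotest_common.sh also records timing labels and manages xtrace, which is elided here):

    # Banner-and-time wrapper in the spirit of run_test.
    run_test() {
        local name=$1; shift
        printf '************************************\n'
        printf 'START TEST %s\n' "$name"
        printf '************************************\n'
        time "$@"
        local rc=$?
        printf '************************************\n'
        printf 'END TEST %s\n' "$name"
        printf '************************************\n'
        return $rc
    }

    run_test blockdev_raid5f ./test/bdev/blockdev.sh raid5f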
00:29:20.490 16:46:51 -- common/autotest_common.sh@10 -- # set +x 00:29:20.490 ************************************ 00:29:20.490 START TEST blockdev_raid5f 00:29:20.490 ************************************ 00:29:20.490 16:46:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:29:20.490 * Looking for test storage... 00:29:20.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:20.491 16:46:51 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:20.491 16:46:51 -- bdev/nbd_common.sh@6 -- # set -e 00:29:20.491 16:46:51 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:20.491 16:46:51 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:20.491 16:46:51 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:20.491 16:46:51 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:20.491 16:46:51 -- bdev/blockdev.sh@18 -- # : 00:29:20.491 16:46:51 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:20.491 16:46:51 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:20.491 16:46:51 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:20.491 16:46:51 -- bdev/blockdev.sh@672 -- # uname -s 00:29:20.491 16:46:51 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:20.491 16:46:51 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:20.491 16:46:51 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:29:20.491 16:46:51 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:20.491 16:46:51 -- bdev/blockdev.sh@682 -- # dek= 00:29:20.491 16:46:51 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:20.491 16:46:51 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:20.491 16:46:51 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:20.491 16:46:51 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:29:20.491 16:46:51 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:29:20.491 16:46:51 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:20.491 16:46:51 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=150180 00:29:20.491 16:46:51 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:20.491 16:46:51 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:20.491 16:46:51 -- bdev/blockdev.sh@47 -- # waitforlisten 150180 00:29:20.491 16:46:51 -- common/autotest_common.sh@819 -- # '[' -z 150180 ']' 00:29:20.491 16:46:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.491 16:46:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:20.491 16:46:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.491 16:46:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:20.491 16:46:51 -- common/autotest_common.sh@10 -- # set +x 00:29:20.491 [2024-07-13 16:46:51.951973] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:29:20.491 [2024-07-13 16:46:51.952293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150180 ]
00:29:20.749 [2024-07-13 16:46:52.107456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:20.749 [2024-07-13 16:46:52.180399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:20.749 [2024-07-13 16:46:52.180628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:21.315 16:46:52 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:21.315 16:46:52 -- common/autotest_common.sh@852 -- # return 0
00:29:21.315 16:46:52 -- bdev/blockdev.sh@692 -- # case "$test_type" in
00:29:21.315 16:46:52 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf
00:29:21.315 16:46:52 -- bdev/blockdev.sh@278 -- # rpc_cmd
00:29:21.315 16:46:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.315 16:46:52 -- common/autotest_common.sh@10 -- # set +x
00:29:21.315 Malloc0
00:29:21.315 Malloc1
00:29:21.315 Malloc2
00:29:21.315 16:46:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.315 16:46:52 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:29:21.315 16:46:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.315 16:46:52 -- common/autotest_common.sh@10 -- # set +x
00:29:21.315 16:46:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.315 16:46:52 -- bdev/blockdev.sh@738 -- # cat
00:29:21.315 16:46:52 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:29:21.315 16:46:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.315 16:46:52 -- common/autotest_common.sh@10 -- # set +x
00:29:21.315 16:46:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.315 16:46:52 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:29:21.315 16:46:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.575 16:46:52 -- common/autotest_common.sh@10 -- # set +x
00:29:21.575 16:46:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.575 16:46:52 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:29:21.575 16:46:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.575 16:46:52 -- common/autotest_common.sh@10 -- # set +x
00:29:21.575 16:46:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.575 16:46:52 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:29:21.575 16:46:52 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:29:21.575 16:46:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.575 16:46:52 -- common/autotest_common.sh@10 -- # set +x
00:29:21.575 16:46:52 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:29:21.575 16:46:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.575 16:46:52 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:29:21.575 16:46:52 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "85ffc22e-71ed-431b-af3e-02f5fb5835c8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "85ffc22e-71ed-431b-af3e-02f5fb5835c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "85ffc22e-71ed-431b-af3e-02f5fb5835c8",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e0c450d6-8d2d-4780-a3eb-cbffe6542013",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "79e1b17e-9553-4007-9be6-61995f5cffde",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "24cb1fa1-f115-43b8-ab98-cdc5fa56e547",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}'
00:29:21.575 16:46:52 -- bdev/blockdev.sh@747 -- # jq -r .name
00:29:21.575 16:46:52 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:29:21.575 16:46:52 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f
00:29:21.575 16:46:52 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:29:21.575 16:46:52 -- bdev/blockdev.sh@752 -- # killprocess 150180
00:29:21.575 16:46:52 -- common/autotest_common.sh@926 -- # '[' -z 150180 ']'
00:29:21.575 16:46:52 -- common/autotest_common.sh@930 -- # kill -0 150180
00:29:21.575 16:46:52 -- common/autotest_common.sh@931 -- # uname
00:29:21.575 16:46:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:21.575 16:46:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150180
00:29:21.575 16:46:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:29:21.575 killing process with pid 150180
00:29:21.575 16:46:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:29:21.575 16:46:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150180'
00:29:21.575 16:46:52 -- common/autotest_common.sh@945 -- # kill 150180
00:29:21.575 16:46:52 -- common/autotest_common.sh@950 -- # wait 150180
00:29:22.510 16:46:53 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:29:22.510 16:46:53 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f ''
00:29:22.510 16:46:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:29:22.510 16:46:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:22.510 16:46:53 -- common/autotest_common.sh@10 -- # set +x
00:29:22.510 ************************************
00:29:22.510 START TEST bdev_hello_world
00:29:22.510 ************************************
00:29:22.510 16:46:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f ''
00:29:22.510 [2024-07-13 16:46:53.777816] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:29:22.510 [2024-07-13 16:46:53.778098] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150234 ]
00:29:22.510 [2024-07-13 16:46:53.934819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:22.769 [2024-07-13 16:46:54.009419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:23.028 [2024-07-13 16:46:54.276906] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:29:23.028 [2024-07-13 16:46:54.277021] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f
00:29:23.028 [2024-07-13 16:46:54.277081] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:29:23.028 [2024-07-13 16:46:54.277522] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:29:23.028 [2024-07-13 16:46:54.277711] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:29:23.028 [2024-07-13 16:46:54.277751] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:29:23.028 [2024-07-13 16:46:54.277834] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:29:23.028
00:29:23.028 [2024-07-13 16:46:54.277897] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:29:23.287
00:29:23.287 real 0m1.028s
00:29:23.287 user 0m0.608s
00:29:23.287 sys 0m0.307s
00:29:23.287 16:46:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:23.287 16:46:54 -- common/autotest_common.sh@10 -- # set +x
00:29:23.287 ************************************
00:29:23.287 END TEST bdev_hello_world
00:29:23.287 ************************************
00:29:23.549 16:46:54 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:29:23.549 16:46:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:29:23.549 16:46:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:23.549 16:46:54 -- common/autotest_common.sh@10 -- # set +x
00:29:23.549 ************************************
00:29:23.549 START TEST bdev_bounds
00:29:23.549 ************************************
00:29:23.549 16:46:54 -- common/autotest_common.sh@1104 -- # bdev_bounds ''
00:29:23.549 16:46:54 -- bdev/blockdev.sh@288 -- # bdevio_pid=150272
00:29:23.549 16:46:54 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
Process bdevio pid: 150272
00:29:23.549 16:46:54 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 150272'
00:29:23.549 16:46:54 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:29:23.549 16:46:54 -- bdev/blockdev.sh@291 -- # waitforlisten 150272
00:29:23.549 16:46:54 -- common/autotest_common.sh@819 -- # '[' -z 150272 ']'
00:29:23.549 16:46:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:23.549 16:46:54 -- common/autotest_common.sh@824 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:23.549 16:46:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
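The hello_world pass above drives one write/read round trip through the public bdev API; it can be rerun standalone against the same configuration, exactly as the harness invoked it:

    build/examples/hello_bdev --json test/bdev/bdev.json -b raid5f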
00:29:23.549 16:46:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:23.549 16:46:54 -- common/autotest_common.sh@10 -- # set +x
00:29:23.549 [2024-07-13 16:46:54.859901] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:29:23.549 [2024-07-13 16:46:54.860108] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150272 ]
00:29:23.549 [2024-07-13 16:46:55.013338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:23.831 [2024-07-13 16:46:55.095687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:23.831 [2024-07-13 16:46:55.095884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:23.831 [2024-07-13 16:46:55.095892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:24.433 16:46:55 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:24.433 16:46:55 -- common/autotest_common.sh@852 -- # return 0
00:29:24.433 16:46:55 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:29:24.433 I/O targets:
00:29:24.433 raid5f: 131072 blocks of 512 bytes (64 MiB)
00:29:24.433
00:29:24.433
00:29:24.433 CUnit - A unit testing framework for C - Version 2.1-3
00:29:24.433 http://cunit.sourceforge.net/
00:29:24.433
00:29:24.433
00:29:24.433 Suite: bdevio tests on: raid5f
00:29:24.433 Test: blockdev write read block ...passed
00:29:24.433 Test: blockdev write zeroes read block ...passed
00:29:24.433 Test: blockdev write zeroes read no split ...passed
00:29:24.433 Test: blockdev write zeroes read split ...passed
00:29:24.692 Test: blockdev write zeroes read split partial ...passed
00:29:24.692 Test: blockdev reset ...passed
00:29:24.692 Test: blockdev write read 8 blocks ...passed
00:29:24.692 Test: blockdev write read size > 128k ...passed
00:29:24.692 Test: blockdev write read invalid size ...passed
00:29:24.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:29:24.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:29:24.692 Test: blockdev write read max offset ...passed
00:29:24.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:29:24.692 Test: blockdev writev readv 8 blocks ...passed
00:29:24.692 Test: blockdev writev readv 30 x 1block ...passed
00:29:24.692 Test: blockdev writev readv block ...passed
00:29:24.692 Test: blockdev writev readv size > 128k ...passed
00:29:24.692 Test: blockdev writev readv size > 128k in two iovs ...passed
00:29:24.692 Test: blockdev comparev and writev ...passed
00:29:24.692 Test: blockdev nvme passthru rw ...passed
00:29:24.692 Test: blockdev nvme passthru vendor specific ...passed
00:29:24.692 Test: blockdev nvme admin passthru ...passed
00:29:24.692 Test: blockdev copy ...passed
00:29:24.692
00:29:24.692 Run Summary: Type Total Ran Passed Failed Inactive
00:29:24.692 suites 1 1 n/a 0 0
00:29:24.692 tests 23 23 23 0 0
00:29:24.692 asserts 130 130 130 0 n/a
00:29:24.692
00:29:24.692 Elapsed time = 0.277 seconds
00:29:24.693 0
00:29:24.693 16:46:55 -- bdev/blockdev.sh@293 -- # killprocess 150272
00:29:24.693 16:46:55 -- common/autotest_common.sh@926 -- # '[' -z 150272 ']'
00:29:24.693 16:46:55 -- common/autotest_common.sh@930 -- # kill -0 150272
00:29:24.693 16:46:55 -- common/autotest_common.sh@931 -- # uname
00:29:24.693 16:46:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:24.693 16:46:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150272
00:29:24.693 16:46:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:29:24.693 16:46:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:29:24.693 16:46:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150272'
killing process with pid 150272
00:29:24.693 16:46:56 -- common/autotest_common.sh@945 -- # kill 150272
00:29:24.693 16:46:56 -- common/autotest_common.sh@950 -- # wait 150272
00:29:25.261 16:46:56 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:29:25.261
00:29:25.261 real 0m1.664s
00:29:25.261 user 0m3.835s
00:29:25.261 sys 0m0.468s
00:29:25.261 16:46:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:25.261 16:46:56 -- common/autotest_common.sh@10 -- # set +x
00:29:25.261 ************************************
00:29:25.261 END TEST bdev_bounds
00:29:25.261 ************************************
00:29:25.261 16:46:56 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:29:25.261 16:46:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:29:25.261 16:46:56 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:25.261 16:46:56 -- common/autotest_common.sh@10 -- # set +x
00:29:25.261 ************************************
00:29:25.261 START TEST bdev_nbd
00:29:25.261 ************************************
00:29:25.261 16:46:56 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:29:25.261 16:46:56 -- bdev/blockdev.sh@298 -- # uname -s
00:29:25.261 16:46:56 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:29:25.261 16:46:56 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:25.261 16:46:56 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:29:25.261 16:46:56 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f')
00:29:25.261 16:46:56 -- bdev/blockdev.sh@302 -- # local bdev_all
00:29:25.261 16:46:56 -- bdev/blockdev.sh@303 -- # local bdev_num=1
00:29:25.261 16:46:56 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:29:25.261 16:46:56 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:29:25.262 16:46:56 -- bdev/blockdev.sh@309 -- # local nbd_all
00:29:25.262 16:46:56 -- bdev/blockdev.sh@310 -- # bdev_num=1
00:29:25.262 16:46:56 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0')
00:29:25.262 16:46:56 -- bdev/blockdev.sh@312 -- # local nbd_list
00:29:25.262 16:46:56 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f')
00:29:25.262 16:46:56 -- bdev/blockdev.sh@313 -- # local bdev_list
00:29:25.262 16:46:56 -- bdev/blockdev.sh@316 -- # nbd_pid=150328
00:29:25.262 16:46:56 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:29:25.262 16:46:56 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:29:25.262 16:46:56 -- bdev/blockdev.sh@318 -- # waitforlisten 150328 /var/tmp/spdk-nbd.sock
00:29:25.262 16:46:56 -- common/autotest_common.sh@819 -- # '[' -z 150328 ']'
00:29:25.262 16:46:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:29:25.262 16:46:56 -- common/autotest_common.sh@824 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:29:25.262 16:46:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:29:25.262 16:46:56 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:25.262 16:46:56 -- common/autotest_common.sh@10 -- # set +x
00:29:25.262 [2024-07-13 16:46:56.603560] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:29:25.262 [2024-07-13 16:46:56.603789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:25.521 [2024-07-13 16:46:56.746486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:25.521 [2024-07-13 16:46:56.820809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:26.088 16:46:57 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:26.088 16:46:57 -- common/autotest_common.sh@852 -- # return 0
00:29:26.088 16:46:57 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f')
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@114 -- # local bdev_list
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f')
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@23 -- # local bdev_list
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@24 -- # local i
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@25 -- # local nbd_device
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:29:26.088 16:46:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f
00:29:26.347 16:46:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:29:26.347 16:46:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:29:26.347 16:46:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:29:26.347 16:46:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:29:26.347 16:46:57 -- common/autotest_common.sh@857 -- # local i
00:29:26.347 16:46:57 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:29:26.347 16:46:57 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:29:26.347 16:46:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:29:26.347 16:46:57 -- common/autotest_common.sh@861 -- # break
00:29:26.347 16:46:57 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:29:26.347 16:46:57 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:29:26.347 16:46:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:26.347 1+0 records in
00:29:26.347 1+0 records out
00:29:26.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318683 s, 12.9 MB/s
00:29:26.347 16:46:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:26.347 16:46:57 -- common/autotest_common.sh@874 -- # size=4096
00:29:26.347 16:46:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:26.347 16:46:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:29:26.347 16:46:57 -- common/autotest_common.sh@877 -- # return 0
00:29:26.347 16:46:57 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:29:26.347 16:46:57 -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:29:26.347 16:46:57 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:29:26.605 {
00:29:26.605 "nbd_device": "/dev/nbd0",
00:29:26.605 "bdev_name": "raid5f"
00:29:26.605 }
00:29:26.605 ]'
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@119 -- # echo '[
00:29:26.605 {
00:29:26.605 "nbd_device": "/dev/nbd0",
00:29:26.605 "bdev_name": "raid5f"
00:29:26.605 }
00:29:26.605 ]'
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@51 -- # local i
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:26.605 16:46:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@41 -- # break
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@45 -- # return 0
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:26.863 16:46:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:27.120 16:46:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:29:27.120 16:46:58 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:29:27.120 16:46:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@65 -- # echo ''
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@65 -- # true
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@65 -- # count=0
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@66 -- # echo 0
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@122 -- # count=0
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@127 -- # return 0
00:29:27.378 16:46:58 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f')
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f')
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@12 -- # local i
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:29:27.378 16:46:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
00:29:27.637 /dev/nbd0
00:29:27.637 16:46:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:27.637 16:46:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:27.637 16:46:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:29:27.637 16:46:58 -- common/autotest_common.sh@857 -- # local i
00:29:27.637 16:46:58 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:29:27.637 16:46:58 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:29:27.637 16:46:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:29:27.637 16:46:58 -- common/autotest_common.sh@861 -- # break
00:29:27.637 16:46:58 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:29:27.637 16:46:58 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:29:27.637 16:46:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:27.637 1+0 records in
00:29:27.637 1+0 records out
00:29:27.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025353 s, 16.2 MB/s
00:29:27.637 16:46:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:27.637 16:46:58 -- common/autotest_common.sh@874 -- # size=4096
00:29:27.637 16:46:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:27.637 16:46:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:29:27.637 16:46:58 -- common/autotest_common.sh@877 -- # return 0
00:29:27.637 16:46:58 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:27.637 16:46:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:29:27.637 16:46:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:27.637 16:46:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:27.637 16:46:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:27.895 16:46:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:29:27.895 {
00:29:27.896 "nbd_device": "/dev/nbd0",
00:29:27.896 "bdev_name": "raid5f"
00:29:27.896 }
00:29:27.896 ]'
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@64 -- # echo '[
00:29:27.896 {
00:29:27.896 "nbd_device": "/dev/nbd0",
00:29:27.896 "bdev_name": "raid5f"
00:29:27.896 }
00:29:27.896 ]'
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@65 -- # count=1
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@66 -- # echo 1
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@95 -- # count=1
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@71 -- # local operation=write
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:29:27.896 256+0 records in
00:29:27.896 256+0 records out
00:29:27.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010387 s, 101 MB/s
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:29:27.896 256+0 records in
00:29:27.896 256+0 records out
00:29:27.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265865 s, 39.4 MB/s
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@51 -- # local i
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:27.896 16:46:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
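Stripped of the tracing, the nbd_dd_data_verify pair just traced is three commands: stage 1 MiB of random data, write it through the NBD device with direct I/O, and byte-compare it back (all three taken from the trace above):

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0   # any mismatch fails the test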
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@41 -- # break
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@45 -- # return 0
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:28.154 16:46:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:28.412 16:46:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:29:28.412 16:46:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:28.412 16:46:59 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@65 -- # echo ''
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@65 -- # true
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@65 -- # count=0
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@66 -- # echo 0
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@104 -- # count=0
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@109 -- # return 0
00:29:28.413 16:46:59 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0')
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@132 -- # local nbd_list
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:29:28.413 16:46:59 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:29:28.672 malloc_lvol_verify
00:29:28.672 16:47:00 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:29:28.931 b16a8e68-68ca-4ac0-8219-2d34e40766f3
00:29:28.931 16:47:00 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
5397abf2-4732-40ec-a06d-56c10cb997e9
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:29:29.190 /dev/nbd0
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:29:29.190 mke2fs 1.46.5 (30-Dec-2021)
00:29:29.190
00:29:29.190 Filesystem too small for a journal
00:29:29.190 Discarding device blocks: 0/1024 done
00:29:29.190 Creating filesystem with 1024 4k blocks and 1024 inodes
00:29:29.190
00:29:29.190 Allocating group tables: 0/1 done
00:29:29.190 Writing inode tables: 0/1 done
00:29:29.190 Writing superblocks and filesystem accounting information: 0/1 done
00:29:29.190
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@51 -- # local i
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:29.190 16:47:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@41 -- # break
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@45 -- # return 0
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:29:29.450 16:47:00 -- bdev/nbd_common.sh@147 -- # return 0
00:29:29.450 16:47:00 -- bdev/blockdev.sh@324 -- # killprocess 150328
00:29:29.450 16:47:00 -- common/autotest_common.sh@926 -- # '[' -z 150328 ']'
00:29:29.450 16:47:00 -- common/autotest_common.sh@930 -- # kill -0 150328
00:29:29.450 16:47:00 -- common/autotest_common.sh@931 -- # uname
00:29:29.450 16:47:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:29.450 16:47:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150328
00:29:29.450 16:47:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:29:29.450 16:47:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
killing process with pid 150328
00:29:29.450 16:47:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150328'
00:29:29.450 16:47:00 -- common/autotest_common.sh@945 -- # kill 150328
00:29:29.450 16:47:00 -- common/autotest_common.sh@950 -- # wait 150328
00:29:30.019 16:47:01 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:29:30.019
00:29:30.019 real 0m4.740s
00:29:30.019 user 0m6.702s
00:29:30.019 sys 0m1.475s
00:29:30.019 16:47:01 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:30.019 16:47:01 -- common/autotest_common.sh@10 -- # set +x
00:29:30.019 ************************************
00:29:30.019 END TEST bdev_nbd
00:29:30.019 ************************************
00:29:30.019 16:47:01 -- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:29:30.019 16:47:01 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']'
00:29:30.019 16:47:01 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']'
00:29:30.019 16:47:01 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite ''
00:29:30.019 16:47:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:29:30.019 16:47:01 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:30.019 16:47:01 -- common/autotest_common.sh@10 -- # set +x
00:29:30.019 ************************************
00:29:30.019 START TEST bdev_fio
00:29:30.019 ************************************
00:29:30.019 16:47:01 -- common/autotest_common.sh@1104 -- # fio_test_suite ''
00:29:30.019 16:47:01 -- bdev/blockdev.sh@329 -- # local env_context
00:29:30.019 16:47:01 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:29:30.019 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:29:30.019 16:47:01 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:29:30.019 16:47:01 -- bdev/blockdev.sh@337 -- # echo ''
00:29:30.019 16:47:01 -- bdev/blockdev.sh@337 -- # sed s/--env-context=//
00:29:30.019 16:47:01 -- bdev/blockdev.sh@337 -- # env_context=
00:29:30.019 16:47:01 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:29:30.019 16:47:01 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:29:30.019 16:47:01 -- common/autotest_common.sh@1260 -- # local workload=verify
00:29:30.019 16:47:01 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO
00:29:30.019 16:47:01 -- common/autotest_common.sh@1262 -- # local env_context=
00:29:30.019 16:47:01 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio
00:29:30.019 16:47:01 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:29:30.019 16:47:01 -- common/autotest_common.sh@1270 -- # '[' -z verify ']'
00:29:30.019 16:47:01 -- common/autotest_common.sh@1274 -- # '[' -n '' ']'
00:29:30.019 16:47:01 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:29:30.019 16:47:01 -- common/autotest_common.sh@1280 -- # cat
00:29:30.019 16:47:01 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']'
00:29:30.019 16:47:01 -- common/autotest_common.sh@1293 -- # cat
00:29:30.019 16:47:01 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']'
00:29:30.019 16:47:01 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version
00:29:30.019 16:47:01 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:29:30.019 16:47:01 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1
00:29:30.019 16:47:01 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:29:30.019 16:47:01 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]'
00:29:30.019 16:47:01 -- bdev/blockdev.sh@341 -- # echo filename=raid5f
00:29:30.019 16:47:01 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:29:30.019 16:47:01 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:29:30.019 16:47:01 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']'
00:29:30.019 16:47:01 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:30.019 16:47:01 -- common/autotest_common.sh@10 -- # set +x
00:29:30.019 ************************************
00:29:30.019 START TEST bdev_fio_rw_verify
00:29:30.019 ************************************
00:29:30.019 16:47:01 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:29:30.019 16:47:01 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:29:30.019 16:47:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio
00:29:30.019 16:47:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan')
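fio_config_gen only has to emit the per-bdev job section; every tunable (engine, queue depth, block size, runtime, JSON config) rides on the fio command line shown above. The generated bdev.fio for this run plausibly reduces to the following (a sketch; the [global] verify options come from the helper's template, which this trace does not show), with serialize_overlap=1 appended once fio reports a 3.x version, as the @1303/@1304 trace entries indicate:

    [job_raid5f]
    filename=raid5f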
00:29:30.019 16:47:01 -- common/autotest_common.sh@1318 -- # local sanitizers
00:29:30.019 16:47:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:29:30.019 16:47:01 -- common/autotest_common.sh@1320 -- # shift
00:29:30.019 16:47:01 -- common/autotest_common.sh@1322 -- # local asan_lib=
00:29:30.019 16:47:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:29:30.019 16:47:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:29:30.019 16:47:01 -- common/autotest_common.sh@1324 -- # grep libasan
00:29:30.019 16:47:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:29:30.019 16:47:01 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:29:30.019 16:47:01 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:29:30.019 16:47:01 -- common/autotest_common.sh@1326 -- # break
00:29:30.019 16:47:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:29:30.020 16:47:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:29:30.279 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:29:30.279 fio-3.35
00:29:30.279 Starting 1 thread
00:29:42.481
00:29:42.481 job_raid5f: (groupid=0, jobs=1): err= 0: pid=150547: Sat Jul 13 16:47:12 2024
00:29:42.481 read: IOPS=12.8k, BW=49.9MiB/s (52.3MB/s)(499MiB/10001msec)
00:29:42.481 slat (usec): min=17, max=452, avg=18.52, stdev= 2.53
00:29:42.481 clat (usec): min=10, max=366, avg=125.03, stdev=44.49
00:29:42.481 lat (usec): min=30, max=676, avg=143.55, stdev=45.02
00:29:42.481 clat percentiles (usec):
00:29:42.481 | 50.000th=[ 130], 99.000th=[ 208], 99.900th=[ 318], 99.990th=[ 334],
00:29:42.481 | 99.999th=[ 355]
00:29:42.481 write: IOPS=13.4k, BW=52.4MiB/s (55.0MB/s)(518MiB/9878msec); 0 zone resets
00:29:42.481 slat (usec): min=8, max=334, avg=16.23, stdev= 3.15
00:29:42.481 clat (usec): min=56, max=1005, avg=284.78, stdev=40.20
00:29:42.481 lat (usec): min=71, max=1340, avg=301.01, stdev=41.13
00:29:42.481 clat percentiles (usec):
00:29:42.481 | 50.000th=[ 285], 99.000th=[ 379], 99.900th=[ 562], 99.990th=[ 693],
00:29:42.481 | 99.999th=[ 955]
00:29:42.481 bw ( KiB/s): min=49560, max=55600, per=98.78%, avg=53009.26, stdev=2039.63, samples=19
00:29:42.481 iops : min=12390, max=13900, avg=13252.32, stdev=509.91, samples=19
00:29:42.481 lat (usec) : 20=0.01%, 50=0.01%, 100=17.60%, 250=40.89%, 500=41.32%
00:29:42.481 lat (usec) : 750=0.18%, 1000=0.01%
00:29:42.481 lat (msec) : 2=0.01%
00:29:42.481 cpu : usr=99.34%, sys=0.65%, ctx=115, majf=0, minf=12175
00:29:42.481 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:42.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:42.481 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:42.481 issued rwts: total=127818,132521,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:42.481 latency : target=0, window=0, percentile=100.00%, depth=8
00:29:42.481
00:29:42.481 Run status group 0 (all jobs):
00:29:42.481 READ: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=499MiB (524MB), run=10001-10001msec
00:29:42.481 WRITE: bw=52.4MiB/s (55.0MB/s), 52.4MiB/s-52.4MiB/s (55.0MB/s-55.0MB/s), io=518MiB (543MB), run=9878-9878msec
00:29:42.481 -----------------------------------------------------
00:29:42.481 Suppressions used:
00:29:42.481 count bytes template
00:29:42.481 1 7 /usr/src/fio/parse.c
00:29:42.481 724 69504 /usr/src/fio/iolog.c
00:29:42.481 1 904 libcrypto.so
00:29:42.481 -----------------------------------------------------
00:29:42.481
00:29:42.481
00:29:42.481 real 0m11.473s
00:29:42.481 user 0m12.064s
00:29:42.481 sys 0m0.804s
00:29:42.481 16:47:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:42.481 16:47:12 -- common/autotest_common.sh@10 -- # set +x
00:29:42.481 ************************************
00:29:42.481 END TEST bdev_fio_rw_verify
00:29:42.481 ************************************
00:29:42.481 16:47:12 -- bdev/blockdev.sh@348 -- # rm -f
00:29:42.481 16:47:12 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:29:42.481 16:47:12 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:29:42.481 16:47:12 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:29:42.481 16:47:12 -- common/autotest_common.sh@1260 -- # local workload=trim
00:29:42.481 16:47:12 -- common/autotest_common.sh@1261 -- # local bdev_type=
00:29:42.481 16:47:12 -- common/autotest_common.sh@1262 -- # local env_context=
00:29:42.481 16:47:12 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio
00:29:42.481 16:47:12 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:29:42.481 16:47:12 -- common/autotest_common.sh@1270 -- # '[' -z trim ']'
00:29:42.481 16:47:12 -- common/autotest_common.sh@1274 -- # '[' -n '' ']'
00:29:42.481 16:47:12 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:29:42.481 16:47:12 -- common/autotest_common.sh@1280 -- # cat
00:29:42.481 16:47:12 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']'
00:29:42.481 16:47:12 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']'
00:29:42.481 16:47:12 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite
00:29:42.481 16:47:12 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "85ffc22e-71ed-431b-af3e-02f5fb5835c8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "85ffc22e-71ed-431b-af3e-02f5fb5835c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "85ffc22e-71ed-431b-af3e-02f5fb5835c8",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e0c450d6-8d2d-4780-a3eb-cbffe6542013",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "79e1b17e-9553-4007-9be6-61995f5cffde",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "24cb1fa1-f115-43b8-ab98-cdc5fa56e547",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}'
00:29:42.481 16:47:12 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:29:42.481 16:47:13 -- bdev/blockdev.sh@353 -- # [[ -n '' ]]
00:29:42.481 16:47:13 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:29:42.481 /home/vagrant/spdk_repo/spdk
00:29:42.481 16:47:13 -- bdev/blockdev.sh@360 -- # popd
00:29:42.481 16:47:13 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT
00:29:42.481 16:47:13 -- bdev/blockdev.sh@362 -- # return 0
00:29:42.481
00:29:42.481 real 0m11.697s
00:29:42.481 user 0m12.188s
00:29:42.481 sys 0m0.905s
00:29:42.481 16:47:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:42.481 ************************************
00:29:42.481 END TEST bdev_fio
00:29:42.481 ************************************
00:29:42.481 16:47:13 -- common/autotest_common.sh@10 -- # set +x
00:29:42.481 16:47:13 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:29:42.481 16:47:13 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:29:42.481 16:47:13 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:29:42.481 16:47:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:42.481 16:47:13 -- common/autotest_common.sh@10 -- # set +x
00:29:42.481 ************************************
00:29:42.481 START TEST bdev_verify
00:29:42.481 ************************************
00:29:42.481 16:47:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:29:42.481 [2024-07-13 16:47:13.190359] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:29:42.481 [2024-07-13 16:47:13.190616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150719 ]
00:29:42.738 [2024-07-13 16:47:13.347797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:42.738 [2024-07-13 16:47:13.443145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:42.738 [2024-07-13 16:47:13.443149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:42.738 Running I/O for 5 seconds...
00:29:47.753
00:29:47.753 Latency(us)
00:29:47.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:47.753 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:47.753 Verification LBA range: start 0x0 length 0x2000
00:29:47.753 raid5f : 5.01 6996.01 27.33 0.00 0.00 29003.23 249.66 22094.99
00:29:47.753 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:47.753 Verification LBA range: start 0x2000 length 0x2000
00:29:47.753 raid5f : 5.01 9348.74 36.52 0.00 0.00 21692.04 308.18 16103.13
00:29:47.753 ===================================================================================================================
00:29:47.753 Total : 16344.75 63.85 0.00 0.00 24822.30 249.66 22094.99
00:29:47.753
00:29:47.753 real 0m6.062s
00:29:47.753 user 0m11.137s
00:29:47.753 sys 0m0.340s
00:29:47.753 16:47:19 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:47.753 16:47:19 -- common/autotest_common.sh@10 -- # set +x
00:29:47.753 ************************************
00:29:47.753 END TEST bdev_verify
00:29:47.753 ************************************
00:29:48.012 16:47:19 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:48.012 16:47:19 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:29:48.012 16:47:19 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:48.012 16:47:19 -- common/autotest_common.sh@10 -- # set +x
00:29:48.012 ************************************
00:29:48.012 START TEST bdev_verify_big_io
00:29:48.012 ************************************
00:29:48.012 16:47:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:48.012 [2024-07-13 16:47:19.321322] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:29:48.012 [2024-07-13 16:47:19.322195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150815 ]
00:29:48.012 [2024-07-13 16:47:19.479852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:48.271 [2024-07-13 16:47:19.564744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:48.271 [2024-07-13 16:47:19.564749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:48.531 Running I/O for 5 seconds...
00:29:53.803
00:29:53.803 Latency(us)
00:29:53.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:53.803 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:53.803 Verification LBA range: start 0x0 length 0x200
00:29:53.803 raid5f : 5.17 535.52 33.47 0.00 0.00 6209249.15 197.00 199728.76
00:29:53.803 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:53.803 Verification LBA range: start 0x200 length 0x200
00:29:53.803 raid5f : 5.15 665.24 41.58 0.00 0.00 5011537.16 175.54 152792.50
00:29:53.803 ===================================================================================================================
00:29:53.803 Total : 1200.75 75.05 0.00 0.00 5547096.29 175.54 199728.76
00:29:54.062
00:29:54.062 real 0m6.199s
00:29:54.062 user 0m11.440s
00:29:54.062 sys 0m0.325s
00:29:54.062 16:47:25 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:54.062 ************************************
00:29:54.062 END TEST bdev_verify_big_io
00:29:54.062 16:47:25 -- common/autotest_common.sh@10 -- # set +x
00:29:54.062 ************************************
00:29:54.062 16:47:25 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:54.062 16:47:25 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:29:54.062 16:47:25 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:54.062 16:47:25 -- common/autotest_common.sh@10 -- # set +x
00:29:54.321 ************************************
00:29:54.321 START TEST bdev_write_zeroes
00:29:54.321 ************************************
00:29:54.321 16:47:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:54.321 [2024-07-13 16:47:25.596124] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:29:54.321 [2024-07-13 16:47:25.596449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150909 ]
00:29:54.321 [2024-07-13 16:47:25.752150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:54.580 [2024-07-13 16:47:25.838826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:54.838 Running I/O for 1 seconds...
00:29:55.772 00:29:55.772 Latency(us) 00:29:55.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.772 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:55.772 raid5f : 1.00 30282.27 118.29 0.00 0.00 4216.24 1310.72 5710.99 00:29:55.772 =================================================================================================================== 00:29:55.772 Total : 30282.27 118.29 0.00 0.00 4216.24 1310.72 5710.99 00:29:56.339 00:29:56.340 real 0m2.060s 00:29:56.340 user 0m1.590s 00:29:56.340 sys 0m0.357s 00:29:56.340 16:47:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.340 ************************************ 00:29:56.340 END TEST bdev_write_zeroes 00:29:56.340 16:47:27 -- common/autotest_common.sh@10 -- # set +x 00:29:56.340 ************************************ 00:29:56.340 16:47:27 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:56.340 16:47:27 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:56.340 16:47:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:56.340 16:47:27 -- common/autotest_common.sh@10 -- # set +x 00:29:56.340 ************************************ 00:29:56.340 START TEST bdev_json_nonenclosed 00:29:56.340 ************************************ 00:29:56.340 16:47:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:56.340 [2024-07-13 16:47:27.721411] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:56.340 [2024-07-13 16:47:27.721617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150959 ] 00:29:56.599 [2024-07-13 16:47:27.863896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.599 [2024-07-13 16:47:27.937640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.599 [2024-07-13 16:47:27.937848] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:29:56.599 [2024-07-13 16:47:27.937896] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:56.858 00:29:56.858 real 0m0.475s 00:29:56.858 user 0m0.237s 00:29:56.858 sys 0m0.139s 00:29:56.858 16:47:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.858 16:47:28 -- common/autotest_common.sh@10 -- # set +x 00:29:56.858 ************************************ 00:29:56.858 END TEST bdev_json_nonenclosed 00:29:56.858 ************************************ 00:29:56.859 16:47:28 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:56.859 16:47:28 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:56.859 16:47:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:56.859 16:47:28 -- common/autotest_common.sh@10 -- # set +x 00:29:56.859 ************************************ 00:29:56.859 START TEST bdev_json_nonarray 00:29:56.859 ************************************ 00:29:56.859 16:47:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:56.859 [2024-07-13 16:47:28.262904] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:56.859 [2024-07-13 16:47:28.263097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150990 ] 00:29:57.118 [2024-07-13 16:47:28.406498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.118 [2024-07-13 16:47:28.481275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.118 [2024-07-13 16:47:28.481501] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:29:57.118 [2024-07-13 16:47:28.481543] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:57.377 00:29:57.377 real 0m0.472s 00:29:57.377 user 0m0.224s 00:29:57.377 sys 0m0.149s 00:29:57.377 16:47:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.377 16:47:28 -- common/autotest_common.sh@10 -- # set +x 00:29:57.377 ************************************ 00:29:57.377 END TEST bdev_json_nonarray 00:29:57.377 ************************************ 00:29:57.377 16:47:28 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:29:57.377 16:47:28 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:29:57.377 16:47:28 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:29:57.377 16:47:28 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:29:57.377 16:47:28 -- bdev/blockdev.sh@809 -- # cleanup 00:29:57.377 16:47:28 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:57.377 16:47:28 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:57.377 16:47:28 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:29:57.377 16:47:28 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:29:57.377 16:47:28 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:29:57.377 16:47:28 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:29:57.377 ************************************ 00:29:57.377 END TEST blockdev_raid5f 00:29:57.377 ************************************ 00:29:57.377 00:29:57.377 real 0m37.009s 00:29:57.377 user 0m49.904s 00:29:57.377 sys 0m5.504s 00:29:57.377 16:47:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.377 16:47:28 -- common/autotest_common.sh@10 -- # set +x 00:29:57.377 16:47:28 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:57.377 16:47:28 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:57.377 16:47:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:57.377 16:47:28 -- common/autotest_common.sh@10 -- # set +x 00:29:57.377 16:47:28 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:57.377 16:47:28 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:57.377 16:47:28 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:57.377 16:47:28 -- common/autotest_common.sh@10 -- # set +x 00:29:59.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:59.914 Waiting for block devices as requested 00:29:59.914 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:00.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:00.484 Cleaning 00:30:00.484 Removing: /var/run/dpdk/spdk0/config 00:30:00.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:00.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:00.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:00.484 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:00.484 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:00.484 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:00.484 Removing: /dev/shm/spdk_tgt_trace.pid114995 00:30:00.484 Removing: /var/run/dpdk/spdk0 00:30:00.484 Removing: /var/run/dpdk/spdk_pid114811 00:30:00.484 Removing: /var/run/dpdk/spdk_pid114995 00:30:00.484 Removing: /var/run/dpdk/spdk_pid115274 00:30:00.484 Removing: /var/run/dpdk/spdk_pid115512 00:30:00.484 Removing: /var/run/dpdk/spdk_pid115675 00:30:00.484 Removing: /var/run/dpdk/spdk_pid115755 00:30:00.484 Removing: /var/run/dpdk/spdk_pid115840 
00:30:00.484 Removing: /var/run/dpdk/spdk_pid115940 00:30:00.484 Removing: /var/run/dpdk/spdk_pid116030 00:30:00.484 Removing: /var/run/dpdk/spdk_pid116078 00:30:00.484 Removing: /var/run/dpdk/spdk_pid116118 00:30:00.484 Removing: /var/run/dpdk/spdk_pid116194 00:30:00.484 Removing: /var/run/dpdk/spdk_pid116305 00:30:00.484 Removing: /var/run/dpdk/spdk_pid116819 00:30:00.484 Removing: /var/run/dpdk/spdk_pid116880 00:30:00.484 Removing: /var/run/dpdk/spdk_pid116943 00:30:00.484 Removing: /var/run/dpdk/spdk_pid116964 00:30:00.484 Removing: /var/run/dpdk/spdk_pid117052 00:30:00.484 Removing: /var/run/dpdk/spdk_pid117073 00:30:00.484 Removing: /var/run/dpdk/spdk_pid117170 00:30:00.484 Removing: /var/run/dpdk/spdk_pid117191 00:30:00.484 Removing: /var/run/dpdk/spdk_pid117236 00:30:00.484 Removing: /var/run/dpdk/spdk_pid117259 00:30:00.484 Removing: /var/run/dpdk/spdk_pid117316 00:30:00.484 Removing: /var/run/dpdk/spdk_pid117339 00:30:00.484 Removing: /var/run/dpdk/spdk_pid117481 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117531 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117567 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117654 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117717 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117749 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117830 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117867 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117900 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117935 00:30:00.744 Removing: /var/run/dpdk/spdk_pid117981 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118003 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118048 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118080 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118118 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118148 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118193 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118216 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118261 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118297 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118331 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118366 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118406 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118434 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118474 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118509 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118542 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118577 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118623 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118645 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118690 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118722 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118760 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118790 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118835 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118858 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118903 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118939 00:30:00.744 Removing: /var/run/dpdk/spdk_pid118973 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119011 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119054 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119085 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119128 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119165 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119211 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119234 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119281 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119359 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119467 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119631 00:30:00.744 
Removing: /var/run/dpdk/spdk_pid119697 00:30:00.744 Removing: /var/run/dpdk/spdk_pid119735 00:30:00.744 Removing: /var/run/dpdk/spdk_pid120931 00:30:00.744 Removing: /var/run/dpdk/spdk_pid121132 00:30:00.744 Removing: /var/run/dpdk/spdk_pid121320 00:30:00.744 Removing: /var/run/dpdk/spdk_pid121428 00:30:00.744 Removing: /var/run/dpdk/spdk_pid121547 00:30:00.744 Removing: /var/run/dpdk/spdk_pid121598 00:30:00.744 Removing: /var/run/dpdk/spdk_pid121636 00:30:00.744 Removing: /var/run/dpdk/spdk_pid121658 00:30:01.004 Removing: /var/run/dpdk/spdk_pid122129 00:30:01.004 Removing: /var/run/dpdk/spdk_pid122217 00:30:01.004 Removing: /var/run/dpdk/spdk_pid122314 00:30:01.004 Removing: /var/run/dpdk/spdk_pid122365 00:30:01.004 Removing: /var/run/dpdk/spdk_pid123509 00:30:01.004 Removing: /var/run/dpdk/spdk_pid124356 00:30:01.004 Removing: /var/run/dpdk/spdk_pid125226 00:30:01.004 Removing: /var/run/dpdk/spdk_pid126315 00:30:01.004 Removing: /var/run/dpdk/spdk_pid127372 00:30:01.004 Removing: /var/run/dpdk/spdk_pid128430 00:30:01.004 Removing: /var/run/dpdk/spdk_pid129895 00:30:01.004 Removing: /var/run/dpdk/spdk_pid131092 00:30:01.004 Removing: /var/run/dpdk/spdk_pid132289 00:30:01.004 Removing: /var/run/dpdk/spdk_pid132969 00:30:01.004 Removing: /var/run/dpdk/spdk_pid133505 00:30:01.004 Removing: /var/run/dpdk/spdk_pid134127 00:30:01.004 Removing: /var/run/dpdk/spdk_pid134610 00:30:01.004 Removing: /var/run/dpdk/spdk_pid135179 00:30:01.004 Removing: /var/run/dpdk/spdk_pid135724 00:30:01.004 Removing: /var/run/dpdk/spdk_pid136371 00:30:01.004 Removing: /var/run/dpdk/spdk_pid136876 00:30:01.004 Removing: /var/run/dpdk/spdk_pid138235 00:30:01.004 Removing: /var/run/dpdk/spdk_pid138816 00:30:01.004 Removing: /var/run/dpdk/spdk_pid139341 00:30:01.004 Removing: /var/run/dpdk/spdk_pid140821 00:30:01.004 Removing: /var/run/dpdk/spdk_pid141471 00:30:01.004 Removing: /var/run/dpdk/spdk_pid142067 00:30:01.004 Removing: /var/run/dpdk/spdk_pid142812 00:30:01.004 Removing: /var/run/dpdk/spdk_pid142856 00:30:01.004 Removing: /var/run/dpdk/spdk_pid142895 00:30:01.004 Removing: /var/run/dpdk/spdk_pid142946 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143068 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143208 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143418 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143712 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143727 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143775 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143797 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143818 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143840 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143860 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143881 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143901 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143921 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143937 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143969 00:30:01.004 Removing: /var/run/dpdk/spdk_pid143984 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144005 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144025 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144047 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144068 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144088 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144107 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144124 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144164 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144188 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144223 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144292 00:30:01.004 Removing: 
/var/run/dpdk/spdk_pid144334 00:30:01.004 Removing: /var/run/dpdk/spdk_pid144351 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144382 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144402 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144413 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144470 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144488 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144520 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144532 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144550 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144563 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144581 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144599 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144612 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144622 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144661 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144696 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144717 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144756 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144770 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144779 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144831 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144850 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144888 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144908 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144920 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144931 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144947 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144960 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144977 00:30:01.264 Removing: /var/run/dpdk/spdk_pid144994 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145079 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145137 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145247 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145270 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145318 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145364 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145396 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145414 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145436 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145473 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145495 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145578 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145631 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145676 00:30:01.264 Removing: /var/run/dpdk/spdk_pid145936 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146048 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146082 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146176 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146249 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146281 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146521 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146650 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146746 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146790 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146829 00:30:01.264 Removing: /var/run/dpdk/spdk_pid146897 00:30:01.264 Removing: /var/run/dpdk/spdk_pid147319 00:30:01.264 Removing: /var/run/dpdk/spdk_pid147359 00:30:01.264 Removing: /var/run/dpdk/spdk_pid147653 00:30:01.264 Removing: /var/run/dpdk/spdk_pid147759 00:30:01.264 Removing: /var/run/dpdk/spdk_pid147861 00:30:01.264 Removing: /var/run/dpdk/spdk_pid147905 00:30:01.526 Removing: /var/run/dpdk/spdk_pid147944 00:30:01.526 Removing: /var/run/dpdk/spdk_pid147967 00:30:01.526 Removing: /var/run/dpdk/spdk_pid149304 00:30:01.526 Removing: /var/run/dpdk/spdk_pid149424 00:30:01.526 Removing: /var/run/dpdk/spdk_pid149430 00:30:01.526 Removing: 
/var/run/dpdk/spdk_pid149447 00:30:01.526 Removing: /var/run/dpdk/spdk_pid149957 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150053 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150180 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150234 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150272 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150543 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150719 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150815 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150909 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150959 00:30:01.526 Removing: /var/run/dpdk/spdk_pid150990 00:30:01.526 Clean 00:30:01.526 killing process with pid 104055 00:30:01.526 killing process with pid 104065 00:30:01.526 16:47:32 -- common/autotest_common.sh@1436 -- # return 0 00:30:01.788 16:47:32 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:30:01.788 16:47:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.788 16:47:32 -- common/autotest_common.sh@10 -- # set +x 00:30:01.788 16:47:33 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:30:01.788 16:47:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.788 16:47:33 -- common/autotest_common.sh@10 -- # set +x 00:30:01.788 16:47:33 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:01.788 16:47:33 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:01.788 16:47:33 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:01.788 16:47:33 -- spdk/autotest.sh@394 -- # hash lcov 00:30:01.788 16:47:33 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:01.788 16:47:33 -- spdk/autotest.sh@396 -- # hostname 00:30:01.788 16:47:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:02.047 geninfo: WARNING: invalid characters removed from testname! 
00:30:40.764 16:48:10 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:44.991 16:48:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:46.944 16:48:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:50.234 16:48:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:52.767 16:48:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:55.306 16:48:26 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:57.838 16:48:29 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:57.838 16:48:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:57.838 16:48:29 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:57.838 16:48:29 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.838 16:48:29 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.838 16:48:29 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:57.838 16:48:29 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:57.838 16:48:29 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:57.838 16:48:29 -- paths/export.sh@5 -- $ export PATH 00:30:57.838 16:48:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:57.838 16:48:29 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:57.838 16:48:29 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:57.838 16:48:29 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720889309.XXXXXX 00:30:57.838 16:48:29 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720889309.xJsVfY 00:30:57.838 16:48:29 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:57.838 16:48:29 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:30:57.838 16:48:29 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:30:57.838 16:48:29 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:30:57.838 16:48:29 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:57.838 16:48:29 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:57.838 16:48:29 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:57.838 16:48:29 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:30:57.838 16:48:29 -- common/autotest_common.sh@10 -- $ set +x 00:30:57.838 16:48:29 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:30:57.838 16:48:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:57.838 16:48:29 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:57.838 16:48:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:57.838 16:48:29 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:57.838 16:48:29 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:57.838 16:48:29 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:30:57.838 16:48:29 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:30:57.838 16:48:29 -- common/autotest_common.sh@10 -- $ set +x 00:30:57.838 16:48:29 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:30:57.838 16:48:29 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]] 00:30:57.838 16:48:29 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:30:57.838 16:48:29 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:30:57.838 16:48:29 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:57.838 16:48:29 -- 
tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:57.838 16:48:29 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:30:57.838 16:48:29 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:30:57.838 16:48:29 -- spdk/autopackage.sh@40 -- $ get_config_params 00:30:57.838 16:48:29 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:30:57.838 16:48:29 -- common/autotest_common.sh@10 -- $ set +x 00:30:57.838 16:48:29 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:30:58.098 16:48:29 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:30:58.098 16:48:29 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto 00:30:58.098 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:30:58.098 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:30:58.098 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:30:58.098 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:58.666 Using 'verbs' RDMA provider 00:31:14.127 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:31:26.344 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:31:26.603 Creating mk/config.mk...done. 00:31:26.603 Creating mk/cc.flags.mk...done. 00:31:26.603 Type 'make' to build. 00:31:26.603 16:48:57 -- spdk/autopackage.sh@43 -- $ make -j10 00:31:26.862 make[1]: Nothing to be done for 'all'. 
00:31:27.121 CC lib/log/log_flags.o 00:31:27.121 CC lib/ut/ut.o 00:31:27.121 CC lib/log/log.o 00:31:27.121 CC lib/log/log_deprecated.o 00:31:27.121 CC lib/ut_mock/mock.o 00:31:27.380 LIB libspdk_ut_mock.a 00:31:27.380 LIB libspdk_ut.a 00:31:27.380 LIB libspdk_log.a 00:31:27.639 CC lib/dma/dma.o 00:31:27.639 CC lib/ioat/ioat.o 00:31:27.639 CXX lib/trace_parser/trace.o 00:31:27.639 CC lib/util/base64.o 00:31:27.639 CC lib/util/bit_array.o 00:31:27.639 CC lib/util/cpuset.o 00:31:27.639 CC lib/util/crc16.o 00:31:27.639 CC lib/util/crc32.o 00:31:27.639 CC lib/util/crc32c.o 00:31:27.639 CC lib/vfio_user/host/vfio_user_pci.o 00:31:27.639 CC lib/util/crc32_ieee.o 00:31:27.639 CC lib/util/crc64.o 00:31:27.639 CC lib/util/dif.o 00:31:27.639 CC lib/util/fd.o 00:31:27.639 LIB libspdk_dma.a 00:31:27.897 CC lib/util/file.o 00:31:27.897 CC lib/util/hexlify.o 00:31:27.897 CC lib/vfio_user/host/vfio_user.o 00:31:27.897 LIB libspdk_ioat.a 00:31:27.897 CC lib/util/iov.o 00:31:27.897 CC lib/util/math.o 00:31:27.897 CC lib/util/pipe.o 00:31:27.897 CC lib/util/strerror_tls.o 00:31:27.897 CC lib/util/string.o 00:31:27.897 CC lib/util/uuid.o 00:31:27.897 CC lib/util/fd_group.o 00:31:27.897 CC lib/util/xor.o 00:31:27.897 LIB libspdk_vfio_user.a 00:31:27.897 CC lib/util/zipf.o 00:31:28.156 LIB libspdk_util.a 00:31:28.156 LIB libspdk_trace_parser.a 00:31:28.156 CC lib/vmd/vmd.o 00:31:28.156 CC lib/vmd/led.o 00:31:28.156 CC lib/rdma/common.o 00:31:28.156 CC lib/rdma/rdma_verbs.o 00:31:28.156 CC lib/json/json_parse.o 00:31:28.156 CC lib/json/json_write.o 00:31:28.156 CC lib/json/json_util.o 00:31:28.156 CC lib/conf/conf.o 00:31:28.156 CC lib/idxd/idxd.o 00:31:28.156 CC lib/env_dpdk/env.o 00:31:28.416 CC lib/env_dpdk/memory.o 00:31:28.416 CC lib/env_dpdk/pci.o 00:31:28.416 CC lib/env_dpdk/init.o 00:31:28.416 CC lib/env_dpdk/threads.o 00:31:28.416 LIB libspdk_json.a 00:31:28.416 LIB libspdk_rdma.a 00:31:28.416 CC lib/idxd/idxd_user.o 00:31:28.416 CC lib/env_dpdk/pci_ioat.o 00:31:28.416 LIB libspdk_conf.a 00:31:28.416 CC lib/env_dpdk/pci_virtio.o 00:31:28.416 LIB libspdk_vmd.a 00:31:28.416 CC lib/env_dpdk/pci_vmd.o 00:31:28.416 CC lib/env_dpdk/pci_idxd.o 00:31:28.675 CC lib/env_dpdk/pci_event.o 00:31:28.675 CC lib/env_dpdk/sigbus_handler.o 00:31:28.675 CC lib/env_dpdk/pci_dpdk.o 00:31:28.675 LIB libspdk_idxd.a 00:31:28.675 CC lib/env_dpdk/pci_dpdk_2207.o 00:31:28.675 CC lib/env_dpdk/pci_dpdk_2211.o 00:31:28.675 CC lib/jsonrpc/jsonrpc_server.o 00:31:28.675 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:31:28.675 CC lib/jsonrpc/jsonrpc_client.o 00:31:28.675 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:31:28.934 LIB libspdk_jsonrpc.a 00:31:28.934 CC lib/rpc/rpc.o 00:31:28.934 LIB libspdk_env_dpdk.a 00:31:29.193 LIB libspdk_rpc.a 00:31:29.193 CC lib/sock/sock.o 00:31:29.193 CC lib/sock/sock_rpc.o 00:31:29.451 CC lib/trace/trace.o 00:31:29.451 CC lib/trace/trace_flags.o 00:31:29.451 CC lib/trace/trace_rpc.o 00:31:29.451 CC lib/notify/notify.o 00:31:29.451 CC lib/notify/notify_rpc.o 00:31:29.451 LIB libspdk_trace.a 00:31:29.451 LIB libspdk_notify.a 00:31:29.451 LIB libspdk_sock.a 00:31:29.710 CC lib/thread/thread.o 00:31:29.710 CC lib/thread/iobuf.o 00:31:29.710 CC lib/nvme/nvme_ctrlr_cmd.o 00:31:29.710 CC lib/nvme/nvme_ctrlr.o 00:31:29.710 CC lib/nvme/nvme_fabric.o 00:31:29.710 CC lib/nvme/nvme_ns.o 00:31:29.710 CC lib/nvme/nvme_ns_cmd.o 00:31:29.711 CC lib/nvme/nvme_pcie.o 00:31:29.711 CC lib/nvme/nvme_qpair.o 00:31:29.711 CC lib/nvme/nvme_pcie_common.o 00:31:29.711 CC lib/nvme/nvme.o 00:31:30.278 LIB libspdk_thread.a 00:31:30.278 CC 
lib/nvme/nvme_quirks.o 00:31:30.278 CC lib/nvme/nvme_transport.o 00:31:30.278 CC lib/nvme/nvme_discovery.o 00:31:30.278 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:31:30.278 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:31:30.278 CC lib/nvme/nvme_tcp.o 00:31:30.278 CC lib/nvme/nvme_opal.o 00:31:30.278 CC lib/nvme/nvme_io_msg.o 00:31:30.537 CC lib/accel/accel.o 00:31:30.537 CC lib/nvme/nvme_poll_group.o 00:31:30.537 CC lib/nvme/nvme_zns.o 00:31:30.537 CC lib/blob/blobstore.o 00:31:30.537 CC lib/init/json_config.o 00:31:30.537 CC lib/nvme/nvme_cuse.o 00:31:30.537 CC lib/init/subsystem.o 00:31:30.797 CC lib/virtio/virtio.o 00:31:30.797 CC lib/nvme/nvme_vfio_user.o 00:31:30.797 CC lib/init/subsystem_rpc.o 00:31:30.797 CC lib/nvme/nvme_rdma.o 00:31:30.797 CC lib/accel/accel_rpc.o 00:31:30.797 CC lib/virtio/virtio_vhost_user.o 00:31:30.797 CC lib/init/rpc.o 00:31:30.797 CC lib/virtio/virtio_vfio_user.o 00:31:31.055 CC lib/accel/accel_sw.o 00:31:31.055 CC lib/virtio/virtio_pci.o 00:31:31.055 LIB libspdk_init.a 00:31:31.055 CC lib/blob/request.o 00:31:31.055 CC lib/blob/zeroes.o 00:31:31.055 CC lib/blob/blob_bs_dev.o 00:31:31.055 LIB libspdk_accel.a 00:31:31.055 CC lib/event/app.o 00:31:31.055 CC lib/event/reactor.o 00:31:31.055 LIB libspdk_virtio.a 00:31:31.056 CC lib/event/log_rpc.o 00:31:31.056 CC lib/event/app_rpc.o 00:31:31.056 CC lib/event/scheduler_static.o 00:31:31.314 CC lib/bdev/bdev_rpc.o 00:31:31.314 CC lib/bdev/bdev.o 00:31:31.314 CC lib/bdev/bdev_zone.o 00:31:31.314 CC lib/bdev/part.o 00:31:31.314 CC lib/bdev/scsi_nvme.o 00:31:31.314 LIB libspdk_event.a 00:31:31.314 LIB libspdk_nvme.a 00:31:31.571 LIB libspdk_blob.a 00:31:31.829 CC lib/blobfs/blobfs.o 00:31:31.829 CC lib/blobfs/tree.o 00:31:31.829 CC lib/lvol/lvol.o 00:31:32.087 LIB libspdk_blobfs.a 00:31:32.087 LIB libspdk_bdev.a 00:31:32.345 LIB libspdk_lvol.a 00:31:32.345 CC lib/nvmf/ctrlr_discovery.o 00:31:32.345 CC lib/scsi/dev.o 00:31:32.345 CC lib/nvmf/ctrlr.o 00:31:32.345 CC lib/scsi/port.o 00:31:32.345 CC lib/nvmf/subsystem.o 00:31:32.345 CC lib/scsi/lun.o 00:31:32.345 CC lib/nvmf/ctrlr_bdev.o 00:31:32.345 CC lib/nvmf/nvmf.o 00:31:32.345 CC lib/nbd/nbd.o 00:31:32.345 CC lib/ftl/ftl_core.o 00:31:32.345 CC lib/ftl/ftl_init.o 00:31:32.345 CC lib/ftl/ftl_layout.o 00:31:32.345 CC lib/ftl/ftl_debug.o 00:31:32.604 CC lib/scsi/scsi.o 00:31:32.604 CC lib/nvmf/nvmf_rpc.o 00:31:32.604 CC lib/nvmf/transport.o 00:31:32.604 CC lib/nvmf/tcp.o 00:31:32.604 CC lib/scsi/scsi_bdev.o 00:31:32.604 CC lib/nbd/nbd_rpc.o 00:31:32.604 CC lib/scsi/scsi_pr.o 00:31:32.604 CC lib/scsi/scsi_rpc.o 00:31:32.604 CC lib/ftl/ftl_io.o 00:31:32.604 LIB libspdk_nbd.a 00:31:32.604 CC lib/nvmf/rdma.o 00:31:32.862 CC lib/scsi/task.o 00:31:32.862 CC lib/ftl/ftl_sb.o 00:31:32.862 CC lib/ftl/ftl_l2p.o 00:31:32.862 CC lib/ftl/ftl_l2p_flat.o 00:31:32.862 CC lib/ftl/ftl_nv_cache.o 00:31:32.862 CC lib/ftl/ftl_band.o 00:31:32.862 CC lib/ftl/ftl_band_ops.o 00:31:32.862 CC lib/ftl/ftl_writer.o 00:31:32.862 LIB libspdk_scsi.a 00:31:32.862 CC lib/ftl/ftl_rq.o 00:31:32.862 CC lib/ftl/ftl_reloc.o 00:31:32.862 CC lib/iscsi/conn.o 00:31:33.121 CC lib/vhost/vhost.o 00:31:33.121 CC lib/vhost/vhost_rpc.o 00:31:33.121 CC lib/vhost/vhost_scsi.o 00:31:33.121 CC lib/vhost/vhost_blk.o 00:31:33.121 CC lib/vhost/rte_vhost_user.o 00:31:33.121 CC lib/ftl/ftl_l2p_cache.o 00:31:33.121 CC lib/iscsi/init_grp.o 00:31:33.121 CC lib/iscsi/iscsi.o 00:31:33.121 CC lib/ftl/ftl_p2l.o 00:31:33.379 CC lib/iscsi/md5.o 00:31:33.379 LIB libspdk_nvmf.a 00:31:33.379 CC lib/iscsi/param.o 00:31:33.379 CC lib/iscsi/portal_grp.o 
00:31:33.379 CC lib/ftl/mngt/ftl_mngt.o 00:31:33.379 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:31:33.379 CC lib/iscsi/tgt_node.o 00:31:33.379 CC lib/iscsi/iscsi_subsystem.o 00:31:33.636 CC lib/iscsi/iscsi_rpc.o 00:31:33.636 CC lib/iscsi/task.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_startup.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_md.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_misc.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:31:33.636 LIB libspdk_vhost.a 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_band.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:31:33.636 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:31:33.636 LIB libspdk_iscsi.a 00:31:33.636 CC lib/ftl/utils/ftl_conf.o 00:31:33.894 CC lib/ftl/utils/ftl_md.o 00:31:33.895 CC lib/ftl/utils/ftl_mempool.o 00:31:33.895 CC lib/ftl/utils/ftl_bitmap.o 00:31:33.895 CC lib/ftl/utils/ftl_property.o 00:31:33.895 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:31:33.895 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:31:33.895 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:31:33.895 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:31:33.895 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:31:33.895 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:31:33.895 CC lib/ftl/upgrade/ftl_sb_v3.o 00:31:33.895 CC lib/ftl/upgrade/ftl_sb_v5.o 00:31:33.895 CC lib/ftl/nvc/ftl_nvc_dev.o 00:31:33.895 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:31:33.895 CC lib/ftl/base/ftl_base_dev.o 00:31:34.153 CC lib/ftl/base/ftl_base_bdev.o 00:31:34.153 LIB libspdk_ftl.a 00:31:34.412 CC module/env_dpdk/env_dpdk_rpc.o 00:31:34.412 CC module/accel/error/accel_error.o 00:31:34.412 CC module/accel/dsa/accel_dsa.o 00:31:34.412 CC module/blob/bdev/blob_bdev.o 00:31:34.412 CC module/scheduler/dynamic/scheduler_dynamic.o 00:31:34.412 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:31:34.412 CC module/accel/iaa/accel_iaa.o 00:31:34.412 CC module/scheduler/gscheduler/gscheduler.o 00:31:34.412 CC module/accel/ioat/accel_ioat.o 00:31:34.412 CC module/sock/posix/posix.o 00:31:34.412 LIB libspdk_env_dpdk_rpc.a 00:31:34.412 CC module/accel/iaa/accel_iaa_rpc.o 00:31:34.670 LIB libspdk_scheduler_gscheduler.a 00:31:34.670 CC module/accel/error/accel_error_rpc.o 00:31:34.670 LIB libspdk_scheduler_dynamic.a 00:31:34.670 LIB libspdk_scheduler_dpdk_governor.a 00:31:34.670 CC module/accel/dsa/accel_dsa_rpc.o 00:31:34.670 LIB libspdk_blob_bdev.a 00:31:34.670 CC module/accel/ioat/accel_ioat_rpc.o 00:31:34.670 LIB libspdk_accel_iaa.a 00:31:34.670 LIB libspdk_accel_error.a 00:31:34.670 LIB libspdk_accel_dsa.a 00:31:34.670 LIB libspdk_accel_ioat.a 00:31:34.670 CC module/blobfs/bdev/blobfs_bdev.o 00:31:34.670 CC module/bdev/malloc/bdev_malloc.o 00:31:34.670 CC module/bdev/error/vbdev_error.o 00:31:34.670 CC module/bdev/delay/vbdev_delay.o 00:31:34.670 CC module/bdev/delay/vbdev_delay_rpc.o 00:31:34.670 CC module/bdev/gpt/gpt.o 00:31:34.670 CC module/bdev/lvol/vbdev_lvol.o 00:31:34.670 CC module/bdev/null/bdev_null.o 00:31:34.670 CC module/bdev/nvme/bdev_nvme.o 00:31:34.929 LIB libspdk_sock_posix.a 00:31:34.929 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:31:34.929 CC module/bdev/gpt/vbdev_gpt.o 00:31:34.929 CC module/bdev/error/vbdev_error_rpc.o 00:31:34.929 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:31:34.929 CC module/bdev/malloc/bdev_malloc_rpc.o 00:31:34.929 CC module/bdev/nvme/bdev_nvme_rpc.o 00:31:34.929 LIB libspdk_bdev_delay.a 00:31:34.929 CC module/bdev/null/bdev_null_rpc.o 
00:31:34.929 LIB libspdk_bdev_error.a 00:31:34.929 LIB libspdk_blobfs_bdev.a 00:31:34.929 CC module/bdev/nvme/nvme_rpc.o 00:31:34.929 CC module/bdev/nvme/bdev_mdns_client.o 00:31:34.929 CC module/bdev/nvme/vbdev_opal.o 00:31:34.929 LIB libspdk_bdev_gpt.a 00:31:34.929 LIB libspdk_bdev_malloc.a 00:31:34.929 CC module/bdev/passthru/vbdev_passthru.o 00:31:34.929 LIB libspdk_bdev_lvol.a 00:31:34.929 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:31:35.189 LIB libspdk_bdev_null.a 00:31:35.189 CC module/bdev/nvme/vbdev_opal_rpc.o 00:31:35.189 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:31:35.189 CC module/bdev/raid/bdev_raid.o 00:31:35.189 LIB libspdk_bdev_passthru.a 00:31:35.189 CC module/bdev/split/vbdev_split.o 00:31:35.189 CC module/bdev/split/vbdev_split_rpc.o 00:31:35.189 CC module/bdev/zone_block/vbdev_zone_block.o 00:31:35.189 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:31:35.189 CC module/bdev/aio/bdev_aio.o 00:31:35.189 CC module/bdev/ftl/bdev_ftl.o 00:31:35.189 CC module/bdev/iscsi/bdev_iscsi.o 00:31:35.189 CC module/bdev/virtio/bdev_virtio_scsi.o 00:31:35.448 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:31:35.449 CC module/bdev/aio/bdev_aio_rpc.o 00:31:35.449 LIB libspdk_bdev_split.a 00:31:35.449 CC module/bdev/raid/bdev_raid_rpc.o 00:31:35.449 LIB libspdk_bdev_zone_block.a 00:31:35.449 CC module/bdev/raid/bdev_raid_sb.o 00:31:35.449 CC module/bdev/ftl/bdev_ftl_rpc.o 00:31:35.449 CC module/bdev/virtio/bdev_virtio_blk.o 00:31:35.449 CC module/bdev/virtio/bdev_virtio_rpc.o 00:31:35.449 LIB libspdk_bdev_aio.a 00:31:35.449 LIB libspdk_bdev_iscsi.a 00:31:35.449 CC module/bdev/raid/raid0.o 00:31:35.449 CC module/bdev/raid/raid1.o 00:31:35.449 CC module/bdev/raid/concat.o 00:31:35.449 CC module/bdev/raid/raid5f.o 00:31:35.449 LIB libspdk_bdev_nvme.a 00:31:35.708 LIB libspdk_bdev_ftl.a 00:31:35.708 LIB libspdk_bdev_virtio.a 00:31:35.708 LIB libspdk_bdev_raid.a 00:31:35.968 CC module/event/subsystems/scheduler/scheduler.o 00:31:35.968 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:31:35.968 CC module/event/subsystems/iobuf/iobuf.o 00:31:35.968 CC module/event/subsystems/sock/sock.o 00:31:35.968 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:31:35.968 CC module/event/subsystems/vmd/vmd.o 00:31:35.968 CC module/event/subsystems/vmd/vmd_rpc.o 00:31:36.227 LIB libspdk_event_sock.a 00:31:36.227 LIB libspdk_event_vhost_blk.a 00:31:36.227 LIB libspdk_event_scheduler.a 00:31:36.227 LIB libspdk_event_iobuf.a 00:31:36.227 LIB libspdk_event_vmd.a 00:31:36.227 CC module/event/subsystems/accel/accel.o 00:31:36.487 LIB libspdk_event_accel.a 00:31:36.746 CC module/event/subsystems/bdev/bdev.o 00:31:36.746 LIB libspdk_event_bdev.a 00:31:37.005 CC module/event/subsystems/scsi/scsi.o 00:31:37.005 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:31:37.005 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:31:37.005 CC module/event/subsystems/nbd/nbd.o 00:31:37.005 LIB libspdk_event_scsi.a 00:31:37.005 LIB libspdk_event_nbd.a 00:31:37.265 LIB libspdk_event_nvmf.a 00:31:37.265 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:31:37.265 CC module/event/subsystems/iscsi/iscsi.o 00:31:37.535 LIB libspdk_event_vhost_scsi.a 00:31:37.535 LIB libspdk_event_iscsi.a 00:31:37.535 CXX app/trace/trace.o 00:31:37.535 CC app/trace_record/trace_record.o 00:31:37.827 CC app/iscsi_tgt/iscsi_tgt.o 00:31:37.827 CC app/nvmf_tgt/nvmf_main.o 00:31:37.827 CC examples/nvme/hello_world/hello_world.o 00:31:37.827 CC examples/ioat/perf/perf.o 00:31:37.827 CC examples/accel/perf/accel_perf.o 00:31:37.827 CC test/accel/dif/dif.o 
00:31:37.827 CC examples/blob/hello_world/hello_blob.o 00:31:37.827 CC examples/bdev/hello_world/hello_bdev.o 00:31:37.827 LINK nvmf_tgt 00:31:37.827 LINK spdk_trace_record 00:31:37.827 LINK ioat_perf 00:31:37.827 LINK hello_world 00:31:37.827 LINK iscsi_tgt 00:31:38.120 LINK hello_blob 00:31:38.120 LINK spdk_trace 00:31:38.120 LINK accel_perf 00:31:38.120 LINK dif 00:31:38.120 LINK hello_bdev 00:31:40.662 CC app/spdk_tgt/spdk_tgt.o 00:31:41.230 LINK spdk_tgt 00:31:42.613 CC examples/ioat/verify/verify.o 00:31:43.178 LINK verify 00:31:45.708 CC test/app/bdev_svc/bdev_svc.o 00:31:46.642 LINK bdev_svc 00:31:53.201 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:31:54.137 LINK nvme_fuzz 00:31:55.514 CC test/app/histogram_perf/histogram_perf.o 00:31:56.083 LINK histogram_perf 00:31:56.083 CC examples/nvme/reconnect/reconnect.o 00:31:57.461 LINK reconnect 00:32:15.546 CC examples/nvme/nvme_manage/nvme_manage.o 00:32:15.546 LINK nvme_manage 00:32:42.090 CC examples/sock/hello_world/hello_sock.o 00:32:42.090 LINK hello_sock 00:32:42.090 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:32:46.276 LINK iscsi_fuzz 00:33:04.371 CC examples/nvme/arbitration/arbitration.o 00:33:04.371 LINK arbitration 00:33:05.750 CC examples/bdev/bdevperf/bdevperf.o 00:33:10.006 LINK bdevperf 00:33:22.215 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:33:22.215 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:33:24.130 LINK vhost_fuzz 00:33:29.409 CC examples/vmd/lsvmd/lsvmd.o 00:33:29.668 LINK lsvmd 00:33:30.608 CC examples/vmd/led/led.o 00:33:31.178 LINK led 00:33:31.178 CC examples/blob/cli/blobcli.o 00:33:33.081 LINK blobcli 00:33:36.367 CC examples/nvme/hotplug/hotplug.o 00:33:36.935 LINK hotplug 00:33:38.311 CC app/spdk_lspci/spdk_lspci.o 00:33:38.571 CC app/spdk_nvme_perf/perf.o 00:33:38.830 LINK spdk_lspci 00:33:40.738 LINK spdk_nvme_perf 00:33:44.027 CC test/app/jsoncat/jsoncat.o 00:33:44.286 LINK jsoncat 00:33:45.223 CC test/app/stub/stub.o 00:33:46.160 LINK stub 00:33:51.436 CC app/spdk_nvme_identify/identify.o 00:33:53.965 CC app/spdk_nvme_discover/discovery_aer.o 00:33:53.965 LINK spdk_nvme_identify 00:33:54.899 LINK spdk_nvme_discover 00:33:58.179 CC app/spdk_top/spdk_top.o 00:34:01.466 LINK spdk_top 00:34:04.897 CC test/bdev/bdevio/bdevio.o 00:34:06.273 LINK bdevio 00:34:08.176 CC examples/nvme/cmb_copy/cmb_copy.o 00:34:09.553 LINK cmb_copy 00:34:12.843 CC test/blobfs/mkfs/mkfs.o 00:34:13.409 LINK mkfs 00:34:13.668 TEST_HEADER include/spdk/config.h 00:34:13.668 CXX test/cpp_headers/accel.o 00:34:14.604 CXX test/cpp_headers/accel_module.o 00:34:14.604 CC test/dma/test_dma/test_dma.o 00:34:15.171 CXX test/cpp_headers/assert.o 00:34:15.739 CXX test/cpp_headers/barrier.o 00:34:15.739 LINK test_dma 00:34:16.675 CXX test/cpp_headers/base64.o 00:34:17.241 CXX test/cpp_headers/bdev.o 00:34:18.174 CXX test/cpp_headers/bdev_module.o 00:34:18.432 CC app/vhost/vhost.o 00:34:19.367 CXX test/cpp_headers/bdev_zone.o 00:34:19.367 LINK vhost 00:34:19.932 CC examples/nvmf/nvmf/nvmf.o 00:34:19.932 CXX test/cpp_headers/bit_array.o 00:34:20.866 CXX test/cpp_headers/bit_pool.o 00:34:20.866 LINK nvmf 00:34:21.430 CXX test/cpp_headers/blob.o 00:34:21.996 CXX test/cpp_headers/blob_bdev.o 00:34:22.561 CXX test/cpp_headers/blobfs.o 00:34:23.127 CXX test/cpp_headers/blobfs_bdev.o 00:34:24.063 CXX test/cpp_headers/conf.o 00:34:25.001 CXX test/cpp_headers/config.o 00:34:25.001 CXX test/cpp_headers/cpuset.o 00:34:25.939 CXX test/cpp_headers/crc16.o 00:34:26.508 CXX test/cpp_headers/crc32.o 00:34:27.447 CXX test/cpp_headers/crc64.o 00:34:28.015 CXX 
test/cpp_headers/dif.o 00:34:28.952 CXX test/cpp_headers/dma.o 00:34:29.889 CC examples/nvme/abort/abort.o 00:34:29.889 CXX test/cpp_headers/endian.o 00:34:30.825 CXX test/cpp_headers/env.o 00:34:31.391 LINK abort 00:34:31.391 CXX test/cpp_headers/env_dpdk.o 00:34:31.648 CXX test/cpp_headers/event.o 00:34:32.583 CXX test/cpp_headers/fd.o 00:34:33.151 CC examples/util/zipf/zipf.o 00:34:33.411 CXX test/cpp_headers/fd_group.o 00:34:33.670 LINK zipf 00:34:34.239 CXX test/cpp_headers/file.o 00:34:35.178 CXX test/cpp_headers/ftl.o 00:34:36.558 CXX test/cpp_headers/gpt_spec.o 00:34:37.978 CXX test/cpp_headers/hexlify.o 00:34:38.915 CXX test/cpp_headers/histogram_data.o 00:34:39.481 CXX test/cpp_headers/idxd.o 00:34:40.416 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:34:40.675 CXX test/cpp_headers/idxd_spec.o 00:34:41.612 LINK pmr_persistence 00:34:41.869 CXX test/cpp_headers/init.o 00:34:42.803 CXX test/cpp_headers/ioat.o 00:34:44.177 CXX test/cpp_headers/ioat_spec.o 00:34:45.112 CXX test/cpp_headers/iscsi_spec.o 00:34:45.679 CXX test/cpp_headers/json.o 00:34:47.055 CXX test/cpp_headers/jsonrpc.o 00:34:47.314 CC examples/thread/thread/thread_ex.o 00:34:47.880 CXX test/cpp_headers/likely.o 00:34:48.816 LINK thread 00:34:49.075 CXX test/cpp_headers/log.o 00:34:50.011 CXX test/cpp_headers/lvol.o 00:34:51.390 CXX test/cpp_headers/memory.o 00:34:52.764 CXX test/cpp_headers/mmio.o 00:34:54.142 CXX test/cpp_headers/nbd.o 00:34:54.142 CXX test/cpp_headers/notify.o 00:34:55.521 CXX test/cpp_headers/nvme.o 00:34:56.900 CXX test/cpp_headers/nvme_intel.o 00:34:57.836 CXX test/cpp_headers/nvme_ocssd.o 00:34:59.214 CXX test/cpp_headers/nvme_ocssd_spec.o 00:35:00.149 CXX test/cpp_headers/nvme_spec.o 00:35:01.084 CXX test/cpp_headers/nvme_zns.o 00:35:02.017 CXX test/cpp_headers/nvmf.o 00:35:02.952 CXX test/cpp_headers/nvmf_cmd.o 00:35:03.211 CXX test/cpp_headers/nvmf_fc_spec.o 00:35:04.588 CXX test/cpp_headers/nvmf_spec.o 00:35:04.588 CC examples/idxd/perf/perf.o 00:35:05.524 CXX test/cpp_headers/nvmf_transport.o 00:35:06.092 LINK idxd_perf 00:35:06.662 CXX test/cpp_headers/opal.o 00:35:07.599 CC app/spdk_dd/spdk_dd.o 00:35:07.857 CXX test/cpp_headers/opal_spec.o 00:35:09.233 CXX test/cpp_headers/pci_ids.o 00:35:09.233 LINK spdk_dd 00:35:10.169 CXX test/cpp_headers/pipe.o 00:35:10.735 CXX test/cpp_headers/queue.o 00:35:10.994 CXX test/cpp_headers/reduce.o 00:35:11.985 CXX test/cpp_headers/rpc.o 00:35:12.920 CXX test/cpp_headers/scheduler.o 00:35:13.856 CXX test/cpp_headers/scsi.o 00:35:15.230 CXX test/cpp_headers/scsi_spec.o 00:35:16.166 CXX test/cpp_headers/sock.o 00:35:17.101 CXX test/cpp_headers/stdinc.o 00:35:17.359 CC examples/interrupt_tgt/interrupt_tgt.o 00:35:17.928 CXX test/cpp_headers/string.o 00:35:18.187 LINK interrupt_tgt 00:35:19.124 CXX test/cpp_headers/thread.o 00:35:20.062 CXX test/cpp_headers/trace.o 00:35:20.999 CXX test/cpp_headers/trace_parser.o 00:35:20.999 CXX test/cpp_headers/tree.o 00:35:21.936 CC app/fio/nvme/fio_plugin.o 00:35:21.936 CXX test/cpp_headers/ublk.o 00:35:22.871 CXX test/cpp_headers/util.o 00:35:23.806 LINK spdk_nvme 00:35:23.806 CXX test/cpp_headers/uuid.o 00:35:24.742 CXX test/cpp_headers/version.o 00:35:25.001 CXX test/cpp_headers/vfio_user_pci.o 00:35:25.938 CXX test/cpp_headers/vfio_user_spec.o 00:35:27.316 CXX test/cpp_headers/vhost.o 00:35:28.694 CXX test/cpp_headers/vmd.o 00:35:30.076 CXX test/cpp_headers/xor.o 00:35:31.015 CXX test/cpp_headers/zipf.o 00:35:33.562 CC test/env/mem_callbacks/mem_callbacks.o 00:35:34.500 LINK mem_callbacks 00:35:41.068 CC 
test/env/vtophys/vtophys.o 00:35:41.641 LINK vtophys 00:35:42.208 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:35:43.584 LINK env_dpdk_post_init 00:35:58.491 CC test/event/event_perf/event_perf.o 00:35:58.491 LINK event_perf 00:35:58.491 CC test/lvol/esnap/esnap.o 00:35:59.864 CC test/env/memory/memory_ut.o 00:36:01.888 LINK memory_ut 00:36:10.006 LINK esnap 00:36:10.575 CC test/nvme/aer/aer.o 00:36:11.143 LINK aer 00:36:11.403 CC app/fio/bdev/fio_plugin.o 00:36:12.777 LINK spdk_bdev 00:36:13.709 CC test/env/pci/pci_ut.o 00:36:14.646 LINK pci_ut 00:36:14.905 CC test/event/reactor/reactor.o 00:36:15.164 CC test/nvme/reset/reset.o 00:36:15.732 LINK reactor 00:36:16.300 LINK reset 00:36:21.573 CC test/nvme/sgl/sgl.o 00:36:21.573 CC test/nvme/e2edp/nvme_dp.o 00:36:22.140 LINK sgl 00:36:22.140 LINK nvme_dp 00:36:23.075 CC test/nvme/overhead/overhead.o 00:36:24.452 LINK overhead 00:36:28.649 CC test/event/reactor_perf/reactor_perf.o 00:36:29.217 LINK reactor_perf 00:36:32.505 CC test/event/app_repeat/app_repeat.o 00:36:33.072 LINK app_repeat 00:36:36.361 CC test/nvme/err_injection/err_injection.o 00:36:36.929 CC test/nvme/startup/startup.o 00:36:37.189 LINK err_injection 00:36:37.759 LINK startup 00:36:43.035 CC test/event/scheduler/scheduler.o 00:36:43.035 LINK scheduler 00:36:45.565 CC test/rpc_client/rpc_client_test.o 00:36:46.501 LINK rpc_client_test 00:36:47.458 CC test/nvme/reserve/reserve.o 00:36:48.025 LINK reserve 00:36:48.025 CC test/thread/poller_perf/poller_perf.o 00:36:48.593 LINK poller_perf 00:36:50.028 CC test/nvme/simple_copy/simple_copy.o 00:36:50.595 CC test/nvme/connect_stress/connect_stress.o 00:36:50.854 LINK simple_copy 00:36:51.113 LINK connect_stress 00:37:03.322 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:37:03.322 CC test/unit/lib/accel/accel.c/accel_ut.o 00:37:03.322 LINK histogram_ut 00:37:03.322 CC test/nvme/boot_partition/boot_partition.o 00:37:03.889 LINK boot_partition 00:37:04.456 CC test/nvme/compliance/nvme_compliance.o 00:37:05.023 CC test/thread/lock/spdk_lock.o 00:37:05.591 LINK nvme_compliance 00:37:06.160 CC test/nvme/fused_ordering/fused_ordering.o 00:37:06.160 LINK accel_ut 00:37:07.098 LINK fused_ordering 00:37:08.478 LINK spdk_lock 00:37:11.767 CC test/nvme/doorbell_aers/doorbell_aers.o 00:37:12.703 LINK doorbell_aers 00:37:17.978 CC test/nvme/fdp/fdp.o 00:37:17.978 CC test/nvme/cuse/cuse.o 00:37:18.918 LINK fdp 00:37:23.139 LINK cuse 00:37:24.077 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:37:25.983 CC test/unit/lib/bdev/part.c/part_ut.o 00:37:30.178 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:37:31.115 LINK scsi_nvme_ut 00:37:32.051 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:37:32.051 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:37:32.051 LINK part_ut 00:37:32.987 LINK gpt_ut 00:37:33.555 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:37:34.184 LINK vbdev_lvol_ut 00:37:34.467 LINK bdev_ut 00:37:35.410 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:37:36.788 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:37:36.788 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:37:37.727 LINK bdev_zone_ut 00:37:37.727 LINK bdev_ut 00:37:37.986 LINK vbdev_zone_block_ut 00:37:38.246 LINK bdev_raid_ut 00:37:38.505 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:37:39.885 LINK bdev_raid_sb_ut 00:37:39.885 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:37:39.885 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:37:40.146 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 
00:37:41.085 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:37:41.653 LINK concat_ut 00:37:41.653 LINK raid1_ut 00:37:43.557 LINK raid5f_ut 00:37:44.491 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:37:45.867 CC test/unit/lib/blob/blob.c/blob_ut.o 00:37:45.867 LINK blob_bdev_ut 00:37:46.435 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:37:46.435 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:37:46.435 LINK bdev_nvme_ut 00:37:46.435 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:37:46.435 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:37:46.695 LINK tree_ut 00:37:46.953 LINK blobfs_bdev_ut 00:37:47.211 LINK blobfs_sync_ut 00:37:47.470 LINK blobfs_async_ut 00:37:48.407 CC test/unit/lib/dma/dma.c/dma_ut.o 00:37:48.975 CC test/unit/lib/event/app.c/app_ut.o 00:37:48.976 LINK dma_ut 00:37:49.235 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:37:49.235 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:37:50.175 LINK app_ut 00:37:50.175 LINK ioat_ut 00:37:50.743 LINK reactor_ut 00:37:52.119 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:37:53.495 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:37:53.495 LINK conn_ut 00:37:53.753 LINK blob_ut 00:37:54.320 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:37:54.320 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:37:54.320 LINK jsonrpc_server_ut 00:37:54.320 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:37:54.580 LINK json_parse_ut 00:37:54.580 LINK json_util_ut 00:37:54.839 LINK json_write_ut 00:37:55.112 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:37:55.112 CC test/unit/lib/log/log.c/log_ut.o 00:37:55.372 LINK log_ut 00:37:55.372 LINK init_grp_ut 00:37:55.372 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:37:55.631 CC test/unit/lib/notify/notify.c/notify_ut.o 00:37:56.200 LINK notify_ut 00:37:57.138 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:37:57.138 LINK lvol_ut 00:37:57.138 CC test/unit/lib/iscsi/param.c/param_ut.o 00:37:57.707 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:37:57.966 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:37:57.966 LINK param_ut 00:37:58.904 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:37:59.164 LINK nvme_ut 00:37:59.164 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:37:59.424 LINK dev_ut 00:37:59.683 LINK iscsi_ut 00:37:59.943 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:38:00.203 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:38:00.203 LINK nvme_ctrlr_ut 00:38:00.203 LINK tcp_ut 00:38:00.203 LINK scsi_ut 00:38:00.462 LINK lun_ut 00:38:00.722 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:38:00.982 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:38:02.365 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:38:02.623 LINK nvme_ctrlr_cmd_ut 00:38:02.623 LINK nvme_ctrlr_ocssd_cmd_ut 00:38:03.191 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:38:03.758 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:38:04.327 LINK scsi_bdev_ut 00:38:04.327 LINK nvme_ns_ut 00:38:04.600 LINK portal_grp_ut 00:38:04.882 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:38:05.457 LINK tgt_node_ut 00:38:05.457 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:38:05.457 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:38:05.457 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:38:05.715 LINK scsi_pr_ut 00:38:05.715 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:38:05.715 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:38:05.973 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:38:06.231 LINK nvme_ns_cmd_ut 
00:38:06.231 LINK nvme_ns_ocssd_cmd_ut
00:38:06.231 LINK nvme_poll_group_ut
00:38:06.489 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:38:06.489 LINK nvme_pcie_ut
00:38:06.748 LINK ctrlr_ut
00:38:06.748 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:38:06.748 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:38:07.007 LINK nvme_qpair_ut
00:38:07.265 LINK nvme_quirks_ut
00:38:08.202 CC test/unit/lib/sock/sock.c/sock_ut.o
00:38:08.462 LINK nvme_tcp_ut
00:38:08.720 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:38:08.720 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:38:08.720 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:38:08.980 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:38:08.980 LINK sock_ut
00:38:09.548 LINK ctrlr_discovery_ut
00:38:09.548 LINK subsystem_ut
00:38:09.548 LINK nvme_transport_ut
00:38:09.548 LINK ctrlr_bdev_ut
00:38:09.548 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:38:09.807 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:38:10.376 CC test/unit/lib/sock/posix.c/posix_ut.o
00:38:10.634 LINK nvmf_ut
00:38:11.203 LINK posix_ut
00:38:11.203 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:38:12.574 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:38:12.574 LINK rdma_ut
00:38:12.574 LINK transport_ut
00:38:12.574 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:38:13.141 CC test/unit/lib/thread/thread.c/thread_ut.o
00:38:13.141 CC test/unit/lib/util/base64.c/base64_ut.o
00:38:13.141 LINK nvme_io_msg_ut
00:38:13.141 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:38:13.141 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:38:13.709 LINK base64_ut
00:38:13.709 LINK pci_event_ut
00:38:13.968 LINK subsystem_ut
00:38:14.227 LINK nvme_pcie_common_ut
00:38:14.487 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:38:15.055 LINK bit_array_ut
00:38:15.055 LINK thread_ut
00:38:15.055 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:38:15.314 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:38:15.314 LINK iobuf_ut
00:38:15.572 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:38:15.572 LINK rpc_ut
00:38:15.831 LINK cpuset_ut
00:38:16.399 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:38:16.967 LINK nvme_fabric_ut
00:38:16.967 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:38:17.227 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:38:17.227 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:38:17.227 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:38:17.227 LINK crc16_ut
00:38:17.227 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:38:17.227 LINK idxd_user_ut
00:38:17.486 LINK nvme_opal_ut
00:38:17.486 LINK idxd_ut
00:38:17.744 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:38:17.744 LINK crc32_ieee_ut
00:38:17.744 CC test/unit/lib/rdma/common.c/common_ut.o
00:38:18.003 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:38:18.003 LINK crc32c_ut
00:38:18.003 LINK common_ut
00:38:18.003 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:38:18.262 LINK vhost_ut
00:38:18.262 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:38:18.262 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:38:18.262 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:38:18.262 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:38:18.262 LINK crc64_ut
00:38:18.262 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:38:18.521 LINK ftl_l2p_ut
00:38:18.780 LINK ftl_io_ut
00:38:18.780 CC test/unit/lib/util/dif.c/dif_ut.o
00:38:19.038 CC test/unit/lib/util/iov.c/iov_ut.o
00:38:19.038 LINK nvme_rdma_ut
00:38:19.038 LINK nvme_cuse_ut
00:38:19.038 LINK ftl_band_ut
00:38:19.298 LINK iov_ut
00:38:19.298 CC test/unit/lib/util/math.c/math_ut.o
00:38:19.557 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:38:19.557 LINK dif_ut
00:38:19.557 LINK math_ut
00:38:19.816 LINK ftl_bitmap_ut
00:38:20.755 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:38:21.015 CC test/unit/lib/util/xor.c/xor_ut.o
00:38:21.015 CC test/unit/lib/util/string.c/string_ut.o
00:38:21.015 LINK pipe_ut
00:38:21.015 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:38:21.015 LINK string_ut
00:38:21.015 LINK xor_ut
00:38:21.274 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:38:21.274 LINK ftl_mempool_ut
00:38:21.274 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:38:21.542 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:38:21.542 LINK ftl_mngt_ut
00:38:21.803 LINK ftl_sb_ut
00:38:22.061 LINK ftl_layout_upgrade_ut
00:38:54.131 json_parse_ut.c: In function ‘test_parse_nesting’:
00:38:54.131 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
00:38:54.131 616 | test_parse_nesting(void)
00:38:54.131 | ^
00:38:54.131 16:56:22 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:38:54.131 make[1]: Nothing to be done for 'clean'.
00:38:56.037 16:56:27 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:38:56.037 16:56:27 -- common/autotest_common.sh@718 -- $ xtrace_disable
00:38:56.037 16:56:27 -- common/autotest_common.sh@10 -- $ set +x
00:38:56.037 16:56:27 -- spdk/autopackage.sh@48 -- $ timing_finish
00:38:56.037 16:56:27 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:56.037 16:56:27 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:38:56.037 16:56:27 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:38:56.037 + [[ -n 2269 ]]
00:38:56.037 + sudo kill 2269
00:38:56.049 [Pipeline] }
00:38:56.069 [Pipeline] // timeout
00:38:56.076 [Pipeline] }
00:38:56.095 [Pipeline] // stage
00:38:56.100 [Pipeline] }
00:38:56.118 [Pipeline] // catchError
00:38:56.128 [Pipeline] stage
00:38:56.130 [Pipeline] { (Stop VM)
00:38:56.144 [Pipeline] sh
00:38:56.420 + vagrant halt
00:38:59.737 ==> default: Halting domain...
00:39:09.773 [Pipeline] sh
00:39:10.056 + vagrant destroy -f
00:39:12.592 ==> default: Removing domain...
00:39:13.540 [Pipeline] sh
00:39:13.820 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:39:13.829 [Pipeline] }
00:39:13.848 [Pipeline] // stage
00:39:13.854 [Pipeline] }
00:39:13.872 [Pipeline] // dir
00:39:13.878 [Pipeline] }
00:39:13.893 [Pipeline] // wrap
00:39:13.899 [Pipeline] }
00:39:13.913 [Pipeline] // catchError
00:39:13.921 [Pipeline] stage
00:39:13.923 [Pipeline] { (Epilogue)
00:39:13.936 [Pipeline] sh
00:39:14.216 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:29.102 [Pipeline] catchError
00:39:29.104 [Pipeline] {
00:39:29.117 [Pipeline] sh
00:39:29.395 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:29.654 Artifacts sizes are good
00:39:29.664 [Pipeline] }
00:39:29.682 [Pipeline] // catchError
00:39:29.695 [Pipeline] archiveArtifacts
00:39:29.702 Archiving artifacts
00:39:30.040 [Pipeline] cleanWs
00:39:30.054 [WS-CLEANUP] Deleting project workspace...
00:39:30.054 [WS-CLEANUP] Deferred wipeout is used...
00:39:30.082 [WS-CLEANUP] done
00:39:30.084 [Pipeline] }
00:39:30.103 [Pipeline] // stage
00:39:30.110 [Pipeline] }
00:39:30.127 [Pipeline] // node
00:39:30.134 [Pipeline] End of Pipeline
00:39:30.169 Finished: SUCCESS